GlusterFS Cluster on FreeBSD with Ansible and GNU Parallel

Today I would like to present an article about setting up a GlusterFS cluster on a FreeBSD system with the Ansible and GNU Parallel tools.

gluster-logo.png

To cite Wikipedia “GlusterFS is a scale-out network-attached storage file system. It has found applications including cloud computing, streaming media services, and content delivery networks.” The GlusterFS page describes it similarly “Gluster is a scalable, distributed file system that aggregates disk storage resources from multiple servers into a single global namespace.”

Here are its advantages:

  • Scales to several petabytes.
  • Handles thousands of clients.
  • POSIX compatible.
  • Uses commodity hardware.
  • Can use any on-disk filesystem that supports extended attributes.
  • Accessible using industry standard protocols like NFS and SMB.
  • Provides replication/quotas/geo-replication/snapshots/bitrot detection.
  • Allows optimization for different workloads.
  • Open Source.

Lab Setup

The lab will be entirely VirtualBox based and will consist of 6 hosts. To avoid making 6 identical FreeBSD installations I used the 12.0-RELEASE virtual machine image available directly from the FreeBSD Project.

There are several formats available – qcow2/raw/vhd/vmdk – but as I will be using VirtualBox I used the VMDK one.
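
If you would like to fetch and unpack the image from the command line, something like the commands below should work – note that the exact URL is my assumption based on the usual FreeBSD download site layout.

vbhost % fetch https://download.freebsd.org/ftp/releases/VM-IMAGES/12.0-RELEASE/amd64/Latest/FreeBSD-12.0-RELEASE-amd64.vmdk.xz
vbhost % unxz FreeBSD-12.0-RELEASE-amd64.vmdk.xz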

I will use different prompts depending on where the command is executed to make the article more readable. If there is ‘%’ at the prompt then a regular user is needed, and if there is ‘#’ at the prompt then a superuser is needed.

gluster1 #    // command run on the gluster1 node
gluster* #    // command run on all gluster nodes
client #      // command run on gluster client
vbhost %      // command run on the VirtualBox host

Here is the list of the machines for the GlusterFS cluster:

10.0.10.11 gluster1
10.0.10.12 gluster2
10.0.10.13 gluster3
10.0.10.14 gluster4
10.0.10.15 gluster5
10.0.10.16 gluster6

Each VirtualBox virtual machine for FreeBSD is the default one (as suggested in the VirtualBox wizard) with 512 MB RAM and NAT Network as shown on the image below.

virtualbox-freebsd-gluster-host.jpg

Here is the configuration of the NAT Network on VirtualBox.

virtualbox-nat-network.jpg

The cloned/copied FreeBSD-12.0-RELEASE-amd64.vmdk images will need to have different UUIDs, so we will use the VBoxManage internalcommands sethduuid command to regenerate them.

vbhost % for I in $( seq 6 ); do cp FreeBSD-12.0-RELEASE-amd64.vmdk    vbox_GlusterFS_${I}.vmdk; done
vbhost % for I in $( seq 6 ); do VBoxManage internalcommands sethduuid vbox_GlusterFS_${I}.vmdk; done
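
If you prefer to create the machines from the command line instead of the VirtualBox wizard, a sketch like the one below should work – the VM name, storage controller name, NAT Network name and its settings are my assumptions matching the screenshots.

vbhost % VBoxManage natnetwork add --netname NatNetwork --network 10.0.10.0/24 --enable --dhcp off
vbhost % VBoxManage createvm --name "FreeBSD GlusterFS 1" --ostype FreeBSD_64 --register
vbhost % VBoxManage modifyvm "FreeBSD GlusterFS 1" --memory 512 --nic1 natnetwork --nat-network1 NatNetwork
vbhost % VBoxManage storagectl "FreeBSD GlusterFS 1" --name SATA --add sata
vbhost % VBoxManage storageattach "FreeBSD GlusterFS 1" --storagectl SATA --port 0 --device 0 --type hdd --medium vbox_GlusterFS_1.vmdk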

To start the whole GlusterFS environment on VirtualBox use these commands.

vbhost % VBoxManage list vms | grep GlusterFS
"FreeBSD GlusterFS 1" {162a3b6f-4ec9-4709-bff8-162b0c8c9c41}
"FreeBSD GlusterFS 2" {2e30326c-ac5d-41d2-9b28-483375df38f6}
"FreeBSD GlusterFS 3" {6b2747ab-3ec6-4b1a-a28e-5d871d7891b3}
"FreeBSD GlusterFS 4" {12379cf8-31d9-4ff1-9945-465fc3ed15f0}
"FreeBSD GlusterFS 5" {a4b0d515-5924-4517-9052-df238c366f2b}
"FreeBSD GlusterFS 6" {66621755-1b97-4486-aa15-a7bec9edb343}

Check which GlusterFS machines are running.

vbhost % VBoxManage list runningvms | grep GlusterFS
vbhost %

Let's now start the machines in VirtualBox headless mode, in parallel.

vbhost % VBoxManage list vms \
           | grep GlusterFS \
           | awk -F \" '{print $2}' \
           | while read I; do VBoxManage startvm "${I}" --type headless & done

After that command you should see these machines running.

vbhost % VBoxManage list runningvms
"FreeBSD GlusterFS 1" {162a3b6f-4ec9-4709-bff8-162b0c8c9c41}
"FreeBSD GlusterFS 2" {2e30326c-ac5d-41d2-9b28-483375df38f6}
"FreeBSD GlusterFS 3" {6b2747ab-3ec6-4b1a-a28e-5d871d7891b3}
"FreeBSD GlusterFS 4" {12379cf8-31d9-4ff1-9945-465fc3ed15f0}
"FreeBSD GlusterFS 5" {a4b0d515-5924-4517-9052-df238c366f2b}
"FreeBSD GlusterFS 6" {66621755-1b97-4486-aa15-a7bec9edb343}

Before we try to connect to our FreeBSD machines we need to make a minimal network configuration. Each FreeBSD machine will have a minimal /etc/rc.conf file, as shown in the example below for the gluster1 host.

gluster1 # cat /etc/rc.conf
hostname=gluster1
ifconfig_DEFAULT="inet 10.0.10.11/24 up"
defaultrouter=10.0.10.1
sshd_enable=YES

For the setup purposes we will need to allow root login on these FreeBSD GlusterFS machines with PermitRootLogin yes option in the /etc/ssh/sshd_config file. You will also need to restart the sshd(8) service after the changes.

gluster1 # grep '^PermitRootLogin' /etc/ssh/sshd_config
PermitRootLogin yes
gluster1 # service sshd restart

By using NAT Network with Port Forwarding the FreeBSD machines will be accessible on the localhost ports. For example the gluster1 machine will be available on port 2211, the gluster2 machine will be available on port 2212 and so on. This is shown in the sockstat utility output below.

vbhost % sockstat -l4
USER     COMMAND    PID   FD PROTO  LOCAL ADDRESS         FOREIGN ADDRESS
vermaden VBoxNetNAT 57622 17 udp4   *:*                   *:*
vermaden VBoxNetNAT 57622 19 tcp4   *:2211                *:*
vermaden VBoxNetNAT 57622 20 tcp4   *:2212                *:*
vermaden VBoxNetNAT 57622 21 tcp4   *:2213                *:*
vermaden VBoxNetNAT 57622 22 tcp4   *:2214                *:*
vermaden VBoxNetNAT 57622 23 tcp4   *:2215                *:*
vermaden VBoxNetNAT 57622 24 tcp4   *:2216                *:*
vermaden VBoxNetNAT 57622 28 tcp4   *:2240                *:*
vermaden VBoxNetNAT 57622 29 tcp4   *:9140                *:*
vermaden VBoxNetNAT 57622 30 tcp4   *:2220                *:*
root     sshd       96791 4  tcp4   *:22                  *:*

I think the correlation between the IP address and the port on the host is obvious 🙂

Here is the list of the machines with ports on localhost:

10.0.10.11 gluster1 2211
10.0.10.12 gluster2 2212
10.0.10.13 gluster3 2213
10.0.10.14 gluster4 2214
10.0.10.15 gluster5 2215
10.0.10.16 gluster6 2216
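
For reference, such port forwarding rules can also be added from the command line – a sketch, assuming the NAT Network is named NatNetwork.

vbhost % for I in $( seq 6 ); do \
           VBoxManage natnetwork modify --netname NatNetwork \
             --port-forward-4 "gluster${I}-ssh:tcp:[]:221${I}:[10.0.10.1${I}]:22"; \
         done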

To connect to such a machine from the VirtualBox host system you will need this command:

vbhost % ssh -l root localhost -p 2211

To avoid typing that every time you need to log in to gluster1, let's make some changes to the ~/.ssh/config file for convenience. This way it will be possible to log in with a much shorter command.

vbhost % ssh gluster1

Here is the modified ~/.ssh/config file.

vbhost % cat ~/.ssh/config
# GENERAL
  StrictHostKeyChecking no
  LogLevel              quiet
  KeepAlive             yes
  ServerAliveInterval   30
  VerifyHostKeyDNS      no

# ALL HOSTS SETTINGS
Host *
  StrictHostKeyChecking no
  Compression           yes

# GLUSTER
Host gluster1
  User root
  Hostname 127.0.0.1
  Port 2211

Host gluster2
  User root
  Hostname 127.0.0.1
  Port 2212

Host gluster3
  User root
  Hostname 127.0.0.1
  Port 2213

Host gluster4
  User root
  Hostname 127.0.0.1
  Port 2214

Host gluster5
  User root
  Hostname 127.0.0.1
  Port 2215

Host gluster6
  User root
  Hostname 127.0.0.1
  Port 2216

I assume that you already have some SSH keys generated (with ~/.ssh/id_rsa as the private key) so let's remove the need to type a password on each SSH login.

vbhost % ssh-copy-id -i ~/.ssh/id_rsa gluster1
Password for root@gluster1:

vbhost % ssh-copy-id -i ~/.ssh/id_rsa gluster2
Password for root@gluster2:

vbhost % ssh-copy-id -i ~/.ssh/id_rsa gluster3
Password for root@gluster3:

vbhost % ssh-copy-id -i ~/.ssh/id_rsa gluster4
Password for root@gluster4:

vbhost % ssh-copy-id -i ~/.ssh/id_rsa gluster5
Password for root@gluster5:

vbhost % ssh-copy-id -i ~/.ssh/id_rsa gluster6
Password for root@gluster6:
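
The six ssh-copy-id invocations above can of course be collapsed into a single loop.

vbhost % for I in $( seq 6 ); do ssh-copy-id -i ~/.ssh/id_rsa gluster${I}; done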

Ansible Setup

As we already have SSH integration in place, we will now configure Ansible to connect to our ‘localhost’ ports for the FreeBSD machines.

Here is the Ansible’s hosts file.

vbhost % cat hosts
[gluster]
gluster1 ansible_port=2211 ansible_host=127.0.0.1 ansible_user=root
gluster2 ansible_port=2212 ansible_host=127.0.0.1 ansible_user=root
gluster3 ansible_port=2213 ansible_host=127.0.0.1 ansible_user=root
gluster4 ansible_port=2214 ansible_host=127.0.0.1 ansible_user=root
gluster5 ansible_port=2215 ansible_host=127.0.0.1 ansible_user=root
gluster6 ansible_port=2216 ansible_host=127.0.0.1 ansible_user=root

[gluster:vars]
ansible_python_interpreter=/usr/local/bin/python2.7
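
Optionally you can also drop a minimal ansible.cfg into the working directory so the -i hosts argument is no longer needed every time – just a convenience sketch.

vbhost % cat ansible.cfg
[defaults]
inventory         = hosts
host_key_checking = False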

Here is the listing of these machines using the ansible command.

vbhost % ansible -i hosts --list-hosts gluster
  hosts (6):
    gluster1
    gluster2
    gluster3
    gluster4
    gluster5
    gluster6

Let's verify that our Ansible setup works correctly.

vbhost % ansible -i hosts -m raw -a 'echo' gluster
gluster1 | CHANGED | rc=0 >>



gluster3 | CHANGED | rc=0 >>



gluster2 | CHANGED | rc=0 >>



gluster5 | CHANGED | rc=0 >>



gluster4 | CHANGED | rc=0 >>



gluster6 | CHANGED | rc=0 >>

It works as desired.

We are not able to use Ansible modules other than raw because by default Python is not installed on FreeBSD, as shown below.

vbhost % ansible -i hosts -m ping gluster
gluster1 | FAILED! => {
    "changed": false,
    "module_stderr": "",
    "module_stdout": "/bin/sh: /usr/local/bin/python2.7: not found\r\n",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 127
}
gluster2 | FAILED! => {
    "changed": false,
    "module_stderr": "",
    "module_stdout": "/bin/sh: /usr/local/bin/python2.7: not found\r\n",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 127
}
gluster4 | FAILED! => {
    "changed": false,
    "module_stderr": "",
    "module_stdout": "/bin/sh: /usr/local/bin/python2.7: not found\r\n",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 127
}
gluster5 | FAILED! => {
    "changed": false,
    "module_stderr": "",
    "module_stdout": "/bin/sh: /usr/local/bin/python2.7: not found\r\n",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 127
}
gluster3 | FAILED! => {
    "changed": false,
    "module_stderr": "",
    "module_stdout": "/bin/sh: /usr/local/bin/python2.7: not found\r\n",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 127
}
gluster6 | FAILED! => {
    "changed": false,
    "module_stderr": "",
    "module_stdout": "/bin/sh: /usr/local/bin/python2.7: not found\r\n",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 127
}

We need to get Python installed on FreeBSD.

We will do this partially with Ansible and partially with GNU Parallel.

vbhost % ansible -i hosts --list-hosts gluster \
           | sed 1d \
           | while read I; do ssh ${I} env ASSUME_ALWAYS_YES=yes pkg install python; done
pkg: Error fetching http://pkg.FreeBSD.org/FreeBSD:12:amd64/quarterly/Latest/pkg.txz: No address record
A pre-built version of pkg could not be found for your system.
Consider changing PACKAGESITE or installing it from ports: 'ports-mgmt/pkg'.
Bootstrapping pkg from pkg+http://pkg.FreeBSD.org/FreeBSD:12:amd64/quarterly, please wait...

… we forgot about setting up DNS in the FreeBSD machines, let’s fix that.

It is as easy as executing the echo nameserver 1.1.1.1 > /etc/resolv.conf command on each FreeBSD machine.

Let's verify what input will be sent to GNU Parallel before executing it.

vbhost % ansible -i hosts --list-hosts gluster \
           | sed 1d \
           | while read I; do echo "ssh ${I} 'echo nameserver 1.1.1.1 > /etc/resolv.conf'"; done
ssh gluster1 'echo nameserver 1.1.1.1 > /etc/resolv.conf'
ssh gluster2 'echo nameserver 1.1.1.1 > /etc/resolv.conf'
ssh gluster3 'echo nameserver 1.1.1.1 > /etc/resolv.conf'
ssh gluster4 'echo nameserver 1.1.1.1 > /etc/resolv.conf'
ssh gluster5 'echo nameserver 1.1.1.1 > /etc/resolv.conf'
ssh gluster6 'echo nameserver 1.1.1.1 > /etc/resolv.conf'

Looks reasonable, let's engage GNU Parallel then.

vbhost % ansible -i hosts --list-hosts gluster \
           | sed 1d \
           | while read I; do echo "ssh ${I} 'echo nameserver 1.1.1.1 > /etc/resolv.conf'"; done | parallel

Computers / CPU cores / Max jobs to run
1:local / 2 / 2

Computer:jobs running/jobs completed/%of started jobs/Average seconds to complete
local:0/6/100%/1.0s

We will now verify that the DNS is configured properly on the FreeBSD machines.

vbhost % for I in $( jot 6 ); do echo -n "gluster${I} "; ssh gluster${I} 'cat /etc/resolv.conf'; done
gluster1 nameserver 1.1.1.1
gluster2 nameserver 1.1.1.1
gluster3 nameserver 1.1.1.1
gluster4 nameserver 1.1.1.1
gluster5 nameserver 1.1.1.1
gluster6 nameserver 1.1.1.1

Verification of DNS resolution using host(1), which also confirms Internet connectivity.

vbhost % for I in $( jot 6 ); do echo; echo "gluster${I}"; ssh gluster${I} host freebsd.org; done

gluster1
freebsd.org has address 96.47.72.84
freebsd.org has IPv6 address 2610:1c1:1:606c::50:15
freebsd.org mail is handled by 10 mx1.freebsd.org.
freebsd.org mail is handled by 30 mx66.freebsd.org.

gluster2
freebsd.org has address 96.47.72.84
freebsd.org has IPv6 address 2610:1c1:1:606c::50:15
freebsd.org mail is handled by 30 mx66.freebsd.org.
freebsd.org mail is handled by 10 mx1.freebsd.org.

gluster3
freebsd.org has address 96.47.72.84
freebsd.org has IPv6 address 2610:1c1:1:606c::50:15
freebsd.org mail is handled by 30 mx66.freebsd.org.
freebsd.org mail is handled by 10 mx1.freebsd.org.

gluster4
freebsd.org has address 96.47.72.84
freebsd.org has IPv6 address 2610:1c1:1:606c::50:15
freebsd.org mail is handled by 30 mx66.freebsd.org.
freebsd.org mail is handled by 10 mx1.freebsd.org.

gluster5
freebsd.org has address 96.47.72.84
freebsd.org has IPv6 address 2610:1c1:1:606c::50:15
freebsd.org mail is handled by 10 mx1.freebsd.org.
freebsd.org mail is handled by 30 mx66.freebsd.org.

gluster6
freebsd.org has address 96.47.72.84
freebsd.org has IPv6 address 2610:1c1:1:606c::50:15
freebsd.org mail is handled by 10 mx1.freebsd.org.
freebsd.org mail is handled by 30 mx66.freebsd.org.

The DNS resolution works properly, now we will switch from the default quarterly pkg(8) repository to the latest one, which has more frequent updates as the name suggests. We will need to use the sed -i '' s/quarterly/latest/g /etc/pkg/FreeBSD.conf command on each FreeBSD machine.

Verification of what will be sent to GNU Parallel.

vbhost % ansible -i hosts --list-hosts gluster \
           | sed 1d \
           | while read I; do echo "ssh ${I} 'sed -i \"\" s/quarterly/latest/g /etc/pkg/FreeBSD.conf'"; done
ssh gluster1 'sed -i "" s/quarterly/latest/g /etc/pkg/FreeBSD.conf'
ssh gluster2 'sed -i "" s/quarterly/latest/g /etc/pkg/FreeBSD.conf'
ssh gluster3 'sed -i "" s/quarterly/latest/g /etc/pkg/FreeBSD.conf'
ssh gluster4 'sed -i "" s/quarterly/latest/g /etc/pkg/FreeBSD.conf'
ssh gluster5 'sed -i "" s/quarterly/latest/g /etc/pkg/FreeBSD.conf'
ssh gluster6 'sed -i "" s/quarterly/latest/g /etc/pkg/FreeBSD.conf'

Let’s send the command to FreeBSD machines then.

vbhost % ansible -i hosts --list-hosts gluster \
           | sed 1d \
           | while read I; do echo "ssh $I 'sed -i \"\" s/quarterly/latest/g /etc/pkg/FreeBSD.conf'"; done | parallel

Computers / CPU cores / Max jobs to run
1:local / 2 / 2

Computer:jobs running/jobs completed/%of started jobs/Average seconds to complete
local:0/6/100%/1.0s

As shown below the latest repository is configured in the /etc/pkg/FreeBSD.conf file on each FreeBSD machine.

vbhost % ssh gluster3 tail -7 /etc/pkg/FreeBSD.conf
FreeBSD: {
  url: "pkg+http://pkg.FreeBSD.org/${ABI}/latest",
  mirror_type: "srv",
  signature_type: "fingerprints",
  fingerprints: "/usr/share/keys/pkg",
  enabled: yes
}

We may now get back to Python.

vbhost % ansible -i hosts --list-hosts gluster \
           | sed 1d \
           | while read I; do echo ssh ${I} env ASSUME_ALWAYS_YES=yes pkg install python; done
ssh gluster1 env ASSUME_ALWAYS_YES=yes pkg install python
ssh gluster2 env ASSUME_ALWAYS_YES=yes pkg install python
ssh gluster3 env ASSUME_ALWAYS_YES=yes pkg install python
ssh gluster4 env ASSUME_ALWAYS_YES=yes pkg install python
ssh gluster5 env ASSUME_ALWAYS_YES=yes pkg install python
ssh gluster6 env ASSUME_ALWAYS_YES=yes pkg install python

… and execution on the FreeBSD machines with GNU Parallel.

vbhost % ansible -i hosts --list-hosts gluster \
           | sed 1d \
           | while read I; do echo ssh ${I} env ASSUME_ALWAYS_YES=yes pkg install python; done | parallel

Computers / CPU cores / Max jobs to run
1:local / 2 / 2

Computer:jobs running/jobs completed/%of started jobs/Average seconds to complete
local:0/6/100%/156.0s

The Python package and its dependencies are now installed.

vbhost % ssh gluster3 pkg info
gettext-runtime-0.19.8.1_2     GNU gettext runtime libraries and programs
indexinfo-0.3.1                Utility to regenerate the GNU info page index
libffi-3.2.1_3                 Foreign Function Interface
pkg-1.10.5_5                   Package manager
python-2.7_3,2                 "meta-port" for the default version of Python interpreter
python2-2_3                    The "meta-port" for version 2 of the Python interpreter
python27-2.7.15                Interpreted object-oriented programming language
readline-7.0.5                 Library for editing command lines as they are typed

Now the Ansible ping module works as desired.

vbhost % ansible -i hosts -m ping gluster
gluster1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
gluster4 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
gluster5 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
gluster3 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
gluster2 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
gluster6 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

GlusterFS Volume Options

GlusterFS has a lot of options for setting up volumes. They are described in the Setting up GlusterFS Volumes part of the GlusterFS Administration Guide. Here they are:

Distributed – Distributed volumes distribute files across the bricks in the volume. You can use distributed volumes where the requirement is to scale storage and the redundancy is either not important or is provided by other hardware/software layers.

Replicated – Replicated volumes replicate files across bricks in the volume. You can use replicated volumes in environments where high-availability and high-reliability are critical.

Distributed Replicated – Distributed replicated volumes distribute files across replicated bricks in the volume. You can use distributed replicated volumes in environments where the requirement is to scale storage and high-reliability is critical. Distributed replicated volumes also offer improved read performance in most environments.

Dispersed – Dispersed volumes are based on erasure codes, providing space-efficient protection against disk or server failures. It stores an encoded fragment of the original file to each brick in a way that only a subset of the fragments is needed to recover the original file. The number of bricks that can be missing without losing access to data is configured by the administrator on volume creation time.

Distributed Dispersed – Distributed dispersed volumes distribute files across dispersed subvolumes. This has the same advantages as distributed replicated volumes, but uses disperse to store the data in the bricks.

Striped [Deprecated] – Striped volumes stripes data across bricks in the volume. For best results, you should use striped volumes only in high concurrency environments accessing very large files.

Distributed Striped [Deprecated] – Distributed striped volumes stripe data across two or more nodes in the cluster. You should use distributed striped volumes where the requirement is to scale storage and in high concurrency environments accessing very large files is critical.

Distributed Striped Replicated [Deprecated] – Distributed striped replicated volumes distributes striped data across replicated bricks in the cluster. For best results, you should use distributed striped replicated volumes in highly concurrent environments where parallel access of very large files and performance is critical. In this release, configuration of this volume type is supported only for Map Reduce workloads.

Striped Replicated [Deprecated] – Striped replicated volumes stripes data across replicated bricks in the cluster. For best results, you should use striped replicated volumes in highly concurrent environments where there is parallel access of very large files and performance is critical. In this release, configuration of this volume type is supported only for Map Reduce workloads.

Of all the volume types above that are still supported, the Dispersed volume seems to be the best choice. Like Minio, Dispersed volumes are based on erasure codes.

As we have 6 servers we will use a 4 + 2 setup, which is a logical RAID6 across these 6 servers. This means that we will be able to lose 2 of them without a service outage. This also means that if we upload a 100 MB file to our volume we will use 150 MB of space across these 6 servers, with 25 MB on each node.

We can visualize this as following ASCII diagram.

+-----------+ +-----------+ +-----------+ +-----------+ +-----------+ +-----------+
|  gluster1 | |  gluster2 | |  gluster3 | |  gluster4 | |  gluster5 | |  gluster6 |
|           | |           | |           | |           | |           | |           |
|    brick1 | |    brick2 | |    brick3 | |    brick4 | |    brick5 | |    brick6 |
+-----+-----+ +-----+-----+ +-----+-----+ +-----+-----+ +-----+-----+ +-----+-----+
      |             |             |             |             |             |
    25|MB         25|MB         25|MB         25|MB         25|MB         25|MB
      |             |             |             |             |             |
      +-------------+-------------+------+------+-------------+-------------+
                                         |
                                      100|MB
                                         |
                                     +---+---+
                                     | file0 |
                                     +-------+

Deploy GlusterFS Cluster

We will use gluster-setup.yml as our Ansible playbook.

Let's create something simple for a start, for example a task that always installs the latest Python package.

vbhost % cat gluster-setup.yml
---
- name: Install and Setup GlusterFS on FreeBSD
  hosts: gluster
  user: root
  tasks:

  - name: Install Latest Python Package
    pkgng:
      name: python
      state: latest

We will now execute it.

vbhost % ansible-playbook -i hosts gluster-setup.yml

PLAY [Install and Setup GlusterFS on FreeBSD] **********************************

TASK [Gathering Facts] *********************************************************
ok: [gluster3]
ok: [gluster5]
ok: [gluster1]
ok: [gluster4]
ok: [gluster2]
ok: [gluster6]

TASK [Install Latest Python Package] *******************************************
ok: [gluster4]
ok: [gluster2]
ok: [gluster5]
ok: [gluster3]
ok: [gluster1]
ok: [gluster6]

PLAY RECAP *********************************************************************
gluster1                   : ok=2    changed=0    unreachable=0    failed=0
gluster2                   : ok=2    changed=0    unreachable=0    failed=0
gluster3                   : ok=2    changed=0    unreachable=0    failed=0
gluster4                   : ok=2    changed=0    unreachable=0    failed=0
gluster5                   : ok=2    changed=0    unreachable=0    failed=0
gluster6                   : ok=2    changed=0    unreachable=0    failed=0

We had just installed Python on these machines, so no update was needed.

As we will be creating a cluster we need to add time synchronization between the nodes of the cluster. We will use the most obvious solution – the ntpd(8) daemon that is in the FreeBSD base system. These lines are added to our gluster-setup.yml playbook to achieve this goal.

  - name: Enable NTPD Service
    raw: sysrc ntpd_enable=YES

  - name: Start NTPD Service
    service:
      name: ntpd
      state: started

After executing the playbook again with the ansible-playbook -i hosts gluster-setup.yml command we will see additional output as the one shown below.

TASK [Enable NTPD Service] ************************************************
changed: [gluster2]
changed: [gluster1]
changed: [gluster4]
changed: [gluster5]
changed: [gluster3]
changed: [gluster6]

TASK [Start NTPD Service] ******************************************************
changed: [gluster5]
changed: [gluster4]
changed: [gluster2]
changed: [gluster1]
changed: [gluster3]
changed: [gluster6]

Random verification of the NTP service.

vbhost % ssh gluster1 ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 0.freebsd.pool. .POOL.          16 p    -   64    0    0.000    0.000   0.000
 ntp.ifj.edu.pl  10.0.2.4         3 u    1   64    1  119.956  -345759  32.552
 news-archive.ic 229.30.220.210   2 u    -   64    1   60.533  -345760  21.104

Now we need to install GlusterFS on FreeBSD machines – the glusterfs package.

We will add appropriate section to the playbook.

  - name: Install Latest GlusterFS Package
    pkgng:
      state: latest
      name:
      - glusterfs
      - ncdu

You can add more than one package to the pkgng Ansible module – for example I have also added the ncdu package.

You can read more about the pkgng Ansible module by typing the ansible-doc pkgng command, or at least its short version with the -s argument.

vbhost % ansible-doc -s pkgng
- name: Package manager for FreeBSD >= 9.0
  pkgng:
      annotation:            # A comma-separated list of keyvalue-pairs of the form `<+/-/:><key>[=<value>]'. A `+'
                               denotes adding an annotation, a `-' denotes removing an annotation, and
                               `:' denotes modifying an annotation. If setting or modifying annotations,
                               a value must be provided.
      autoremove:            # Remove automatically installed packages which are no longer needed.
      cached:                # Use local package base instead of fetching an updated one.
      chroot:                # Pkg will chroot in the specified environment. Can not be used together with `rootdir' or `jail'
                               options.
      jail:                  # Pkg will execute in the given jail name or id. Can not be used together with `chroot' or `rootdir'
                               options.
      name:                  # (required) Name or list of names of packages to install/remove.
      pkgsite:               # For pkgng versions before 1.1.4, specify packagesite to use for downloading packages. If not
                               specified, use settings from `/usr/local/etc/pkg.conf'. For newer
                               pkgng versions, specify a the name of a repository configured in
                               `/usr/local/etc/pkg/repos'.
      rootdir:               # For pkgng versions 1.5 and later, pkg will install all packages within the specified root directory.
                               Can not be used together with `chroot' or `jail' options.
      state:                 # State of the package. Note: "latest" added in 2.7

You can read more about this particular module on the following – https://docs.ansible.com/ansible/latest/modules/pkgng_module.html – Ansible page.

We will now add the GlusterFS nodes to the /etc/hosts file and add the autoboot_delay=1 parameter to the /boot/loader.conf file so our systems will boot 9 seconds faster, as 10 seconds is the default delay setting.

Here is our gluster-setup.yml Ansible playbook so far.

vbhost % cat gluster-setup.yml
---
- name: Install and Setup GlusterFS on FreeBSD
  hosts: gluster
  user: root
  tasks:

  - name: Install Latest Python Package
    pkgng:
      name: python
      state: latest

  - name: Enable NTPD Service
    raw: sysrc ntpd_enable=YES

  - name: Start NTPD Service
    service:
      name: ntpd
      state: started

  - name: Install Latest GlusterFS Package
    pkgng:
      state: latest
      name:
      - glusterfs
      - ncdu

  - name: Add Nodes to /etc/hosts File
    blockinfile:
      path: /etc/hosts
      block: |
        10.0.10.11 gluster1
        10.0.10.12 gluster2
        10.0.10.13 gluster3
        10.0.10.14 gluster4
        10.0.10.15 gluster5
        10.0.10.16 gluster6

  - name: Add autoboot_delay to /boot/loader.conf File
    lineinfile:
      path: /boot/loader.conf
      line: autoboot_delay=1
      create: yes

Here is the result of the execution of this playbook.

vbhost % ansible-playbook -i hosts gluster-setup.yml

PLAY [Install and Setup GlusterFS on FreeBSD] **********************************

TASK [Gathering Facts] *********************************************************
ok: [gluster3]
ok: [gluster5]
ok: [gluster1]
ok: [gluster4]
ok: [gluster2]
ok: [gluster6]

TASK [Install Latest Python Package] *******************************************
ok: [gluster4]
ok: [gluster2]
ok: [gluster5]
ok: [gluster3]
ok: [gluster1]
ok: [gluster6]

TASK [Install Latest GlusterFS Package] ****************************************
ok: [gluster2]
ok: [gluster1]
ok: [gluster3]
ok: [gluster5]
ok: [gluster4]
ok: [gluster6]

TASK [Add Nodes to /etc/hosts File] ********************************************
changed: [gluster5]
changed: [gluster4]
changed: [gluster2]
changed: [gluster3]
changed: [gluster1]
changed: [gluster6]

TASK [Enable GlusterFS Service] ************************************************
changed: [gluster1]
changed: [gluster4]
changed: [gluster2]
changed: [gluster3]
changed: [gluster5]
changed: [gluster6]

TASK [Add autoboot_delay to /boot/loader.conf File] ****************************
changed: [gluster3]
changed: [gluster2]
changed: [gluster5]
changed: [gluster1]
changed: [gluster4]
changed: [gluster6]

PLAY RECAP *********************************************************************
gluster1                   : ok=6    changed=3    unreachable=0    failed=0
gluster2                   : ok=6    changed=3    unreachable=0    failed=0
gluster3                   : ok=6    changed=3    unreachable=0    failed=0
gluster4                   : ok=6    changed=3    unreachable=0    failed=0
gluster5                   : ok=6    changed=3    unreachable=0    failed=0
gluster6                   : ok=6    changed=3    unreachable=0    failed=0

Let's check that the FreeBSD machines can now ping each other by name.

vbhost % ssh gluster6 cat /etc/hosts
# LOOPBACK
127.0.0.1      localhost localhost.my.domain
::1            localhost localhost.my.domain

# BEGIN ANSIBLE MANAGED BLOCK
10.0.10.11 gluster1
10.0.10.12 gluster2
10.0.10.13 gluster3
10.0.10.14 gluster4
10.0.10.15 gluster5
10.0.10.16 gluster6
# END ANSIBLE MANAGED BLOCK

vbhost % ssh gluster1 ping -c 1 gluster3
PING gluster3 (10.0.10.13): 56 data bytes
64 bytes from 10.0.10.13: icmp_seq=0 ttl=64 time=1.924 ms

--- gluster3 ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 1.924/1.924/1.924/0.000 ms

… and our /boot/loader.conf file.

vbhost % ssh gluster4 cat /boot/loader.conf
autoboot_delay=1

Now we need to create directories for the GlusterFS data. Without a better idea we will use the /data directory with /data/volume1 as the directory for volume1, and the bricks will be placed in /data/volume1/brickX directories. In this setup I will use just one brick per server, but in a production environment you would probably use one brick per physical disk.

Here is the playbook command we will use to create these directories on FreeBSD machines.

  - name: Create brick* Directories for volume1
    raw: mkdir -p /data/volume1/brick` hostname | grep -o -E '[0-9]+' `

After executing it with the ansible-playbook -i hosts gluster-setup.yml command the directories have been created.

vbhost % ssh gluster2 find /data -ls | column -t
2247168  8  drwxr-xr-x  3  root  wheel  512  Dec  28  17:48  /data
2247169  8  drwxr-xr-x  3  root  wheel  512  Dec  28  17:48  /data/volume1
2247170  8  drwxr-xr-x  2  root  wheel  512  Dec  28  17:48  /data/volume1/brick2

We now need to add glusterd_enable=YES to the /etc/rc.conf file on the GlusterFS nodes and then start the GlusterFS service.

This is the snippet we will add to our playbook.

  - name: Enable GlusterFS Service
    raw: sysrc glusterd_enable=YES

  - name: Start GlusterFS Service
    service:
      name: glusterd
      state: started

Let's make a quick random verification.

vbhost % ssh gluster4 service glusterd status
glusterd is running as pid 2684.

Now we need to proceed to the last part of the GlusterFS setup – creating the volume.

We will do this from gluster1 – the 1st node of the GlusterFS cluster.

First we need to peer probe other nodes.

gluster1 # gluster peer probe gluster1
peer probe: success. Probe on localhost not needed
gluster1 # gluster peer probe gluster2
peer probe: success.
gluster1 # gluster peer probe gluster3
peer probe: success.
gluster1 # gluster peer probe gluster4
peer probe: success.
gluster1 # gluster peer probe gluster5
peer probe: success.
gluster1 # gluster peer probe gluster6
peer probe: success.
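
Before creating the volume it does not hurt to verify the peering – gluster peer status run on gluster1 should report five peers in the Connected state, while gluster pool list should additionally include localhost.

gluster1 # gluster peer status
gluster1 # gluster pool list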

Then we can create the volume. We will need to use the force option because for our example setup we use directories on the root partition.

gluster1 # gluster volume create volume1 \
             disperse-data 4 \
             redundancy 2 \
             transport tcp \
             gluster1:/data/volume1/brick1 \
             gluster2:/data/volume1/brick2 \
             gluster3:/data/volume1/brick3 \
             gluster4:/data/volume1/brick4 \
             gluster5:/data/volume1/brick5 \
             gluster6:/data/volume1/brick6 \
             force
volume create: volume1: success: please start the volume to access data

We can now start the volume1 GlusterFS volume.

gluster1 # gluster volume start volume1
volume start: volume1: success

gluster1 # gluster volume status volume1
Status of volume: volume1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster1:/data/volume1/brick1         N/A       N/A        N       N/A
Brick gluster2:/data/volume1/brick2         N/A       N/A        N       N/A
Brick gluster3:/data/volume1/brick3         N/A       N/A        N       N/A
Brick gluster4:/data/volume1/brick4         N/A       N/A        N       N/A
Brick gluster5:/data/volume1/brick5         N/A       N/A        N       N/A
Brick gluster6:/data/volume1/brick6         N/A       N/A        N       N/A
Self-heal Daemon on localhost               N/A       N/A        N       644
Self-heal Daemon on gluster6                N/A       N/A        N       643
Self-heal Daemon on gluster5                N/A       N/A        N       647
Self-heal Daemon on gluster2                N/A       N/A        N       645
Self-heal Daemon on gluster3                N/A       N/A        N       645
Self-heal Daemon on gluster4                N/A       N/A        N       645

Task Status of Volume volume1
------------------------------------------------------------------------------
There are no active volume tasks

gluster1 # gluster volume info volume1

Volume Name: volume1
Type: Disperse
Volume ID: 68cf9607-16bc-4550-9b6b-16a5c7656f51
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: gluster1:/data/volume1/brick1
Brick2: gluster2:/data/volume1/brick2
Brick3: gluster3:/data/volume1/brick3
Brick4: gluster4:/data/volume1/brick4
Brick5: gluster5:/data/volume1/brick5
Brick6: gluster6:/data/volume1/brick6
Options Reconfigured:
nfs.disable: on
transport.address-family: inet

Here are the contents of a currently unused/empty brick.

gluster1 # find /data/volume1/brick1
/data/volume1/brick1
/data/volume1/brick1/.glusterfs
/data/volume1/brick1/.glusterfs/indices
/data/volume1/brick1/.glusterfs/indices/xattrop
/data/volume1/brick1/.glusterfs/indices/entry-changes
/data/volume1/brick1/.glusterfs/quarantine
/data/volume1/brick1/.glusterfs/quarantine/stub-00000000-0000-0000-0000-000000000008
/data/volume1/brick1/.glusterfs/changelogs
/data/volume1/brick1/.glusterfs/changelogs/htime
/data/volume1/brick1/.glusterfs/changelogs/csnap
/data/volume1/brick1/.glusterfs/brick1.db
/data/volume1/brick1/.glusterfs/brick1.db-wal
/data/volume1/brick1/.glusterfs/brick1.db-shm
/data/volume1/brick1/.glusterfs/00
/data/volume1/brick1/.glusterfs/00/00
/data/volume1/brick1/.glusterfs/00/00/00000000-0000-0000-0000-000000000001
/data/volume1/brick1/.glusterfs/landfill
/data/volume1/brick1/.glusterfs/unlink
/data/volume1/brick1/.glusterfs/health_check

The 6-node GlusterFS cluster is now complete and volume1 is available to use.

Alternative

The GlusterFS documentation's Quick Start Guide also suggests using Ansible to deploy and manage GlusterFS with the gluster-ansible repository or gluster-ansible-cluster, but they have the requirements below.

  • Ansible version 2.5 or above.
  • GlusterFS version 3.2 or above.

As GlusterFS on FreeBSD is at version 3.11.1 I did not use them.

FreeBSD Client

We will now use another VirtualBox machine – also based on the same FreeBSD 12.0-RELEASE image – to create a FreeBSD Client machine that will mount our volume1 volume.

We will need to install the glusterfs package with the pkg(8) command. Then we will use the mount_glusterfs command to mount the volume. Keep in mind that in order to mount a GlusterFS volume the FUSE (fuse.ko) kernel module is needed.

client # pkg install glusterfs

client # kldload fuse

client # mount_glusterfs 10.0.10.11:volume1 /mnt

client # echo $?
0

client # mount
/dev/gpt/rootfs on / (ufs, local, soft-updates)
devfs on /dev (devfs, local, multilabel)
/dev/fuse on /mnt (fusefs, local, synchronous)

client # ls /mnt
ls: /mnt: Socket is not connected

It is mounted but does not work. The solution to this problem is to add the appropriate /etc/hosts entries for the GlusterFS nodes.

client # cat /etc/hosts
::1                     localhost localhost.my.domain
127.0.0.1               localhost localhost.my.domain

10.0.10.11 gluster1
10.0.10.12 gluster2
10.0.10.13 gluster3
10.0.10.14 gluster4
10.0.10.15 gluster5
10.0.10.16 gluster6

Let's mount it again, now with the needed /etc/hosts entries.

client # umount /mnt

client # mount_glusterfs gluster1:volume1 /mnt

client # ls /mnt
client #

We now have our GlusterFS volume properly mounted and working on the FreeBSD Client machine.
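
By the way, to have the fuse.ko module loaded automatically at boot you can add it to kld_list in the /etc/rc.conf file – a small sketch.

client # sysrc kld_list+=" fuse"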

Let's write a file there with dd(8) to see how it works.

client # dd < /dev/zero > FILE bs=1m count=100 status=progress
  73400320 bytes (73 MB, 70 MiB) transferred 1.016s, 72 MB/s
100+0 records in
100+0 records out
104857600 bytes transferred in 1.565618 secs (66975227 bytes/sec)

Let’s see how it looks in the brick directory.

gluster1 # ls -lh /data/volume1/brick1
total 25640
drw-------  10 root  wheel   512B Jan  3 18:31 .glusterfs
-rw-r--r--   2 root  wheel    25M Jan  3 18:31 FILE

gluster1 # find /data
/data/
/data/volume1
/data/volume1/brick1
/data/volume1/brick1/.glusterfs
/data/volume1/brick1/.glusterfs/indices
/data/volume1/brick1/.glusterfs/indices/xattrop
/data/volume1/brick1/.glusterfs/indices/xattrop/xattrop-aed814f1-0eb0-46a1-b569-aeddf5048e06
/data/volume1/brick1/.glusterfs/indices/entry-changes
/data/volume1/brick1/.glusterfs/quarantine
/data/volume1/brick1/.glusterfs/quarantine/stub-00000000-0000-0000-0000-000000000008
/data/volume1/brick1/.glusterfs/changelogs
/data/volume1/brick1/.glusterfs/changelogs/htime
/data/volume1/brick1/.glusterfs/changelogs/csnap
/data/volume1/brick1/.glusterfs/brick1.db
/data/volume1/brick1/.glusterfs/brick1.db-wal
/data/volume1/brick1/.glusterfs/brick1.db-shm
/data/volume1/brick1/.glusterfs/00
/data/volume1/brick1/.glusterfs/00/00
/data/volume1/brick1/.glusterfs/00/00/00000000-0000-0000-0000-000000000001
/data/volume1/brick1/.glusterfs/landfill
/data/volume1/brick1/.glusterfs/unlink
/data/volume1/brick1/.glusterfs/health_check
/data/volume1/brick1/.glusterfs/ac
/data/volume1/brick1/.glusterfs/ac/b4
/data/volume1/brick1/.glusterfs/11
/data/volume1/brick1/.glusterfs/11/50
/data/volume1/brick1/.glusterfs/11/50/115043ca-420f-48b5-af05-c9552db2e585
/data/volume1/brick1/FILE

Linux Client

I will also show how to mount a GlusterFS volume on the Red Hat clone CentOS in its latest 7.6 incarnation. It will require installation of the glusterfs-fuse package.

[root@localhost ~]# yum install glusterfs-fuse


[root@localhost ~]# rpm -q --filesbypkg glusterfs-fuse | grep /sbin/mount.glusterfs
glusterfs-fuse            /sbin/mount.glusterfs

[root@localhost ~]# mount.glusterfs 10.0.10.11:volume1 /mnt
Mount failed. Please check the log file for more details.

Similarly to the FreeBSD Client, the /etc/hosts entries are needed.

[root@localhost ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

10.0.10.11 gluster1
10.0.10.12 gluster2
10.0.10.13 gluster3
10.0.10.14 gluster4
10.0.10.15 gluster5
10.0.10.16 gluster6

[root@localhost ~]# mount.glusterfs 10.0.10.11:volume1 /mnt

[root@localhost ~]# ls /mnt
FILE

[root@localhost ~]# mount
10.0.10.11:volume1 on /mnt type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)

With the appropriate /etc/hosts entries it works as desired. We see the FILE file generated from the FreeBSD Client machine.
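
If you want the volume mounted automatically at boot on the Linux client, an /etc/fstab entry with the _netdev option (so it is mounted only after the network is up) should do – a sketch.

[root@localhost ~]# echo 'gluster1:volume1 /mnt glusterfs defaults,_netdev 0 0' >> /etc/fstab
[root@localhost ~]# umount /mnt
[root@localhost ~]# mount /mnt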

GlusterFS Cluster Redundancy

After messing with the volume and creating and deleting various files I also tested its redundancy. In theory this RAID6-equivalent protection should protect us from the loss of two of the six servers. After the shutdown of two VirtualBox machines the volume was still available and ready to use.
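
If you would like to repeat that test yourself, the two nodes can be killed straight from the VirtualBox host as sketched below – with the 4 + 2 dispersed volume the filesystem mounted on the client should keep working.

vbhost % VBoxManage controlvm "FreeBSD GlusterFS 5" poweroff
vbhost % VBoxManage controlvm "FreeBSD GlusterFS 6" poweroff
client # ls /mnt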

Closing Thoughts

It is a pity that FreeBSD does not provide a more modern GlusterFS package, as currently only version 3.11.1 is available.

EOF

MongoDB Replica Set Cluster on Oracle Linux

Meet MongoDB.

mongodb-logo

MongoDB is a free and open-source cross-platform document database with scalability and flexibility. Classified as a NoSQL database, MongoDB uses JSON-like documents with schemas. MongoDB is a distributed database at its core, so high availability, horizontal scaling, and geographic distribution are built in and easy to use.

Today I will show you how to install and configure a MongoDB Replica Set cluster with 4 data nodes and 1 arbiter node. The minimal replica set configuration is three members, and a replica set can have up to 50 members in total, of which at most 7 can vote. The replica set must have an odd number of voting members. As I have always used FreeBSD or its forks for various setups, today I will use the latest Oracle Linux 7.5 for this example.

Architecture

Below is the POOR MAN’S ASCII ARCHITECT diagram showing the five node MongoDB replica set cluster installation.

mongo0 [DATA]       |   |
/var/lib/mongo -- > |   |
                    |   |
mongo1 [DATA]       | M |
/var/lib/mongo -- > | o |
                    | n |
mongo2 [DATA]       | g |
/var/lib/mongo -- > | o |
                    |   |
mongo3 [DATA]       | D |
/var/lib/mongo -- > | B |
                    |   |
mongo4 [ARBITER]    |   |
/var/lib/mongo -- x |   |

The MongoDB project visualizes this a little differently, as shown below.

mongodb-replica-set-four-members-one-arbiter

VirtualBox

For convenience we will use VirtualBox virtual machines for our MongoDB replica set cluster. Below is the list of VirtualBox virtual machines used in the setup.

virtualbox-mongodb-list

We will use VirtualBox NAT Network connectivity for communication between the virtual machines. Below are the settings for the NAT Network we will use here.

virtualbox-mongodb-nat-01

virtualbox-mongodb-nat-02

virtualbox-mongodb-nat-03-forward

virtualbox-mongodb-nat-04-vm

We can verify that the port forwarding is working with the sockstat command from the host FreeBSD system.

host % sockstat -l4
USER     COMMAND    PID   FD PROTO  LOCAL ADDRESS         FOREIGN ADDRESS      
vermaden VBoxNetNAT 13138 17 udp4   *:*                   *:*
vermaden VBoxNetNAT 13138 19 tcp4   *:2200                *:*
vermaden VBoxNetNAT 13138 20 tcp4   *:2201                *:*
vermaden VBoxNetNAT 13138 21 tcp4   *:2202                *:*
vermaden VBoxNetNAT 13138 22 tcp4   *:2203                *:*
vermaden VBoxNetNAT 13138 23 tcp4   *:2204                *:*
root     sshd       986   4  tcp4   *:22                  *:*

The table below lists all MongoDB nodes and their IP addresses and roles that we will use.

NODE    ADDRESS        ROLE
mongo0  10.0.10.10/24  DATA
mongo1  10.0.10.11/24  DATA
mongo2  10.0.10.12/24  DATA
mongo3  10.0.10.13/24  DATA
mongo4  10.0.10.14/24  ARBITER (does not contain data)

The ‘last’ mongo4 node will have the ARBITER role while the mongo0 to mongo3 nodes will have the DATA role. Similarly to the Distributed Object Storage with Minio on FreeBSD setup, you can place two nodes (mongo0 and mongo2 for example) in the primary datacenter, the other two nodes (mongo1 and mongo3 for example) in the secondary datacenter, and the mongo4 ARBITER node in a third datacenter or other location reachable from the primary and secondary datacenters.

To not do the same thing five times I installed the first node (mongo0), updated it and made some preconfiguration, then powered it off and cloned it into the remaining nodes. Remember to regenerate the MAC addresses in the VirtualBox interface during the cloning process to avoid ‘strange’ connectivity problems 🙂
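
The cloning itself can also be done from the command line – a sketch, assuming the source machine is named mongo0 like in the screenshots; VBoxManage clonevm generates fresh MAC addresses by default.

host % for I in 1 2 3 4; do VBoxManage clonevm mongo0 --name mongo${I} --register; done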

After cloning, the only files that need to be modified are these:

  • /etc/sysconfig/network-scripts/ifcfg-eth0
  • /etc/hostname

Below is an example for mongo4 machine.

[root@mongo4 ~]# grep 4$ /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/hostname 
/etc/sysconfig/network-scripts/ifcfg-eth0:IPADDR=10.0.10.14
/etc/sysconfig/network-scripts/ifcfg-eth0:PREFIX=24
/etc/hostname:mongo4
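
For the record, here is one way to apply these two changes on a freshly cloned node – assuming the clone still carries the 10.0.10.10 address of mongo0.

[root@mongo4 ~]# hostnamectl set-hostname mongo4
[root@mongo4 ~]# sed -i s/10.0.10.10/10.0.10.14/ /etc/sysconfig/network-scripts/ifcfg-eth0
[root@mongo4 ~]# systemctl restart network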

To distinguish the commands typed on the host system from those typed on the mongoX virtual machines I use two different prompts; this way it should be obvious which command to execute and where.

Command on the host system.

host % command

Command on the mongoX virtual machine.

[root@mongoX ~]# command

Linux

I have installed Oracle Linux 7.5 on a single primary / partition with the XFS filesystem, as shown in the images below. This is a Minimal install with a statically configured network connection in VirtualBox NAT Network mode.

oracle-linux-7.5-install-01

oracle-linux-7.5-install-02

oracle-linux-7.5-install-03

If that makes life easier for anybody, here is the /root/anaconda-ks.cfg file.

[root@mongo0 ~]# cat /root/anaconda-ks.cfg 
#version=DEVEL
# System authorization information
auth --enableshadow --passalgo=sha512
repo --name="Server-HighAvailability" --baseurl=file:///run/install/repo/addons/HighAvailability
repo --name="Server-ResilientStorage" --baseurl=file:///run/install/repo/addons/ResilientStorage
# Use CDROM installation media
cdrom
# Use graphical install
graphical
# Run the Setup Agent on first boot
firstboot --enable
ignoredisk --only-use=sda
# Keyboard layouts
keyboard --vckeymap=us --xlayouts='us'
# System language
lang en_US.UTF-8

# Network information
network  --bootproto=static --device=enp0s3 --gateway=10.0.10.1 --ip=10.0.10.10 --nameserver=1.1.1.1 --netmask=255.255.255.0 --ipv6=auto --activate
network  --hostname=mongo0

# Root password
rootpw --iscrypted $6$EzciOQdLpJD8IJTv$wnAvxjgP.JluqsRAPu/mbTv8Upvg02AAb4.T5zBi6VMGdNfNsiRw7Gp0FyRtwAGW5Orpqc1nRwtRFwLQDJU/l.
# System services
services --disabled="chronyd"
# System timezone
timezone Europe/Warsaw --isUtc --nontp
# System bootloader configuration
bootloader --location=mbr --boot-drive=sda
# Partition clearing information
clearpart --none --initlabel
# Disk partitioning information
part / --fstype="xfs" --ondisk=sda --size=16383 --label=ROOT

%packages
@^minimal
@core

%end

%addon com_redhat_kdump --disable --reserve-mb='auto'

%end

%anaconda
pwpolicy root --minlen=6 --minquality=1 --notstrict --nochanges --notempty
pwpolicy user --minlen=6 --minquality=1 --notstrict --nochanges --emptyok
pwpolicy luks --minlen=6 --minquality=1 --notstrict --nochanges --notempty
%end

After the first boot I will yum update the system to the latest version.

[root@mongo0 ~]# yum update
Loaded plugins: ulninfo
Resolving Dependencies
--> Running transaction check
---> Package initscripts.x86_64 0:9.49.41-1.0.1.el7 will be updated
---> Package initscripts.x86_64 0:9.49.41-1.0.3.el7 will be an update
---> Package kernel-uek.x86_64 0:4.1.12-124.14.1.el7uek will be installed
---> Package kernel-uek-firmware.noarch 0:4.1.12-124.14.1.el7uek will be installed
---> Package krb5-libs.x86_64 0:1.15.1-18.el7 will be updated
---> Package krb5-libs.x86_64 0:1.15.1-19.el7 will be an update
---> Package selinux-policy.noarch 0:3.13.1-192.0.1.el7 will be updated
---> Package selinux-policy.noarch 0:3.13.1-192.0.1.el7_5.3 will be an update
---> Package selinux-policy-targeted.noarch 0:3.13.1-192.0.1.el7 will be updated
---> Package selinux-policy-targeted.noarch 0:3.13.1-192.0.1.el7_5.3 will be an update
---> Package tzdata.noarch 0:2018c-1.el7 will be updated
---> Package tzdata.noarch 0:2018d-1.el7 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

====================================================================================================
 Package                        Arch          Version                       Repository         Size
====================================================================================================
Installing:
 kernel-uek                     x86_64        4.1.12-124.14.1.el7uek        ol7_UEKR4          46 M
 kernel-uek-firmware            noarch        4.1.12-124.14.1.el7uek        ol7_UEKR4         2.5 M
Updating:
 initscripts                    x86_64        9.49.41-1.0.3.el7             ol7_latest        437 k
 krb5-libs                      x86_64        1.15.1-19.el7                 ol7_latest        747 k
 selinux-policy                 noarch        3.13.1-192.0.1.el7_5.3        ol7_latest        452 k
 selinux-policy-targeted        noarch        3.13.1-192.0.1.el7_5.3        ol7_latest        6.6 M
 tzdata                         noarch        2018d-1.el7                   ol7_latest        480 k

Transaction Summary
====================================================================================================
Install  2 Packages
Upgrade  5 Packages

Total download size: 57 M
Is this ok [y/d/N]: y
Downloading packages:
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
warning: /var/cache/yum/x86_64/7Server/ol7_latest/packages/initscripts-9.49.41-1.0.3.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Public key for initscripts-9.49.41-1.0.3.el7.x86_64.rpm is not installed
(1/7): initscripts-9.49.41-1.0.3.el7.x86_64.rpm                              | 437 kB  00:00:02     
(2/7): selinux-policy-3.13.1-192.0.1.el7_5.3.noarch.rpm                      | 452 kB  00:00:01     
(3/7): krb5-libs-1.15.1-19.el7.x86_64.rpm                                    | 747 kB  00:00:04     
(4/7): tzdata-2018d-1.el7.noarch.rpm                                         | 480 kB  00:00:02     
Public key for kernel-uek-firmware-4.1.12-124.14.1.el7uek.noarch.rpm is not installed  00:01:09 ETA 
(5/7): kernel-uek-firmware-4.1.12-124.14.1.el7uek.noarch.rpm                 | 2.5 MB  00:00:13     
(6/7): selinux-policy-targeted-3.13.1-192.0.1.el7_5.3.noarch.rpm             | 6.6 MB  00:00:22     
(7/7): kernel-uek-4.1.12-124.14.1.el7uek.x86_64.rpm                          |  46 MB  00:01:19     
----------------------------------------------------------------------------------------------------
Total                                                               732 kB/s |  57 MB  00:01:19     
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
Importing GPG key 0xEC551F03:
 Userid     : "Oracle OSS group (Open Source Software group) "
 Fingerprint: 4214 4123 fecf c55b 9086 313d 72f9 7b74 ec55 1f03
 Package    : 7:oraclelinux-release-7.5-1.0.3.el7.x86_64 (@anaconda/7.5)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
Is this ok [y/N]: y
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Updating   : selinux-policy-3.13.1-192.0.1.el7_5.3.noarch                                    1/12 
  Updating   : initscripts-9.49.41-1.0.3.el7.x86_64                                            2/12 
  Installing : kernel-uek-firmware-4.1.12-124.14.1.el7uek.noarch                               3/12 
  Installing : kernel-uek-4.1.12-124.14.1.el7uek.x86_64                                        4/12 
  Updating   : selinux-policy-targeted-3.13.1-192.0.1.el7_5.3.noarch                           5/12 
  Updating   : tzdata-2018d-1.el7.noarch                                                       6/12 
  Updating   : krb5-libs-1.15.1-19.el7.x86_64                                                  7/12 
  Cleanup    : selinux-policy-targeted-3.13.1-192.0.1.el7.noarch                               8/12 
  Cleanup    : selinux-policy-3.13.1-192.0.1.el7.noarch                                        9/12 
  Cleanup    : tzdata-2018c-1.el7.noarch                                                      10/12 
  Cleanup    : krb5-libs-1.15.1-18.el7.x86_64                                                 11/12 
  Cleanup    : initscripts-9.49.41-1.0.1.el7.x86_64                                           12/12 
  Verifying  : kernel-uek-4.1.12-124.14.1.el7uek.x86_64                                        1/12 
  Verifying  : selinux-policy-targeted-3.13.1-192.0.1.el7_5.3.noarch                           2/12 
  Verifying  : kernel-uek-firmware-4.1.12-124.14.1.el7uek.noarch                               3/12 
  Verifying  : initscripts-9.49.41-1.0.3.el7.x86_64                                            4/12 
  Verifying  : selinux-policy-3.13.1-192.0.1.el7_5.3.noarch                                    5/12 
  Verifying  : krb5-libs-1.15.1-19.el7.x86_64                                                  6/12 
  Verifying  : tzdata-2018d-1.el7.noarch                                                       7/12 
  Verifying  : initscripts-9.49.41-1.0.1.el7.x86_64                                            8/12 
  Verifying  : tzdata-2018c-1.el7.noarch                                                       9/12 
  Verifying  : krb5-libs-1.15.1-18.el7.x86_64                                                 10/12 
  Verifying  : selinux-policy-3.13.1-192.0.1.el7.noarch                                       11/12 
  Verifying  : selinux-policy-targeted-3.13.1-192.0.1.el7.noarch                              12/12 

Installed:
  kernel-uek.x86_64 0:4.1.12-124.14.1.el7uek   kernel-uek-firmware.noarch 0:4.1.12-124.14.1.el7uek  

Updated:
  initscripts.x86_64 0:9.49.41-1.0.3.el7                                                            
  krb5-libs.x86_64 0:1.15.1-19.el7                                                                  
  selinux-policy.noarch 0:3.13.1-192.0.1.el7_5.3                                                    
  selinux-policy-targeted.noarch 0:3.13.1-192.0.1.el7_5.3                                           
  tzdata.noarch 0:2018d-1.el7                                                                       

Complete!
[root@mongo0 ~]#

As one of the packages was a kernel I will now reboot the system.

[root@mongo0 ~]# reboot

After the reboot there will be two 4.x kernels installed, the original one that came on the ISO image and the latest one; let's remove the unneeded older version.

[root@mongo0 ~]# rpm -qa | grep kernel | sort
kernel-3.10.0-862.el7.x86_64
kernel-tools-3.10.0-862.el7.x86_64
kernel-tools-libs-3.10.0-862.el7.x86_64
kernel-uek-4.1.12-112.16.4.el7uek.x86_64
kernel-uek-4.1.12-124.14.1.el7uek.x86_64
kernel-uek-firmware-4.1.12-112.16.4.el7uek.noarch
kernel-uek-firmware-4.1.12-124.14.1.el7uek.noarch

[root@mongo0 ~]# uname -r
4.1.12-124.14.1.el7uek.x86_64

[root@mongo0 ~]# rpm -e kernel-uek-firmware-4.1.12-112.16.4.el7uek.noarch kernel-uek-4.1.12-112.16.4.el7uek.x86_64

[root@mongo0 ~]# rpm -qa | grep kernel | sort
kernel-3.10.0-862.el7.x86_64
kernel-tools-3.10.0-862.el7.x86_64
kernel-tools-libs-3.10.0-862.el7.x86_64
kernel-uek-4.1.12-124.14.1.el7uek.x86_64
kernel-uek-firmware-4.1.12-124.14.1.el7uek.noarch

Now we will add the MongoDB repository.

[root@mongo0 ~]# cat > /etc/yum.repos.d/mongodb-org-3.6.repo << __EOF
> [mongodb-org-3.6]
> name=MongoDB Repository
> baseurl=https://repo.mongodb.org/yum/redhat/\$releasever/mongodb-org/3.6/x86_64/
> gpgcheck=1
> enabled=1
> gpgkey=https://www.mongodb.org/static/pgp/server-3.6.asc
> __EOF

[root@mongo0 ~]# cat /etc/yum.repos.d/mongodb-org-3.6.repo
[mongodb-org-3.6]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.6/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.6.asc

We will then install the MongoDB package.

[root@mongo0 ~]# yum install mongodb-org
Loaded plugins: ulninfo
Resolving Dependencies
--> Running transaction check
---> Package mongodb-org.x86_64 0:3.6.4-1.el7 will be installed
--> Processing Dependency: mongodb-org-tools = 3.6.4 for package: mongodb-org-3.6.4-1.el7.x86_64
--> Processing Dependency: mongodb-org-shell = 3.6.4 for package: mongodb-org-3.6.4-1.el7.x86_64
--> Processing Dependency: mongodb-org-server = 3.6.4 for package: mongodb-org-3.6.4-1.el7.x86_64
--> Processing Dependency: mongodb-org-mongos = 3.6.4 for package: mongodb-org-3.6.4-1.el7.x86_64
--> Running transaction check
---> Package mongodb-org-mongos.x86_64 0:3.6.4-1.el7 will be installed
---> Package mongodb-org-server.x86_64 0:3.6.4-1.el7 will be installed
---> Package mongodb-org-shell.x86_64 0:3.6.4-1.el7 will be installed
---> Package mongodb-org-tools.x86_64 0:3.6.4-1.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

====================================================================================================
 Package                     Arch            Version                 Repository                Size
====================================================================================================
Installing:
 mongodb-org                 x86_64          3.6.4-1.el7             mongodb-org-3.6          5.8 k
Installing for dependencies:
 mongodb-org-mongos          x86_64          3.6.4-1.el7             mongodb-org-3.6           12 M
 mongodb-org-server          x86_64          3.6.4-1.el7             mongodb-org-3.6           20 M
 mongodb-org-shell           x86_64          3.6.4-1.el7             mongodb-org-3.6           12 M
 mongodb-org-tools           x86_64          3.6.4-1.el7             mongodb-org-3.6           46 M

Transaction Summary
====================================================================================================
Install  1 Package (+4 Dependent packages)

Total download size: 90 M
Installed size: 265 M
Is this ok [y/d/N]: y
Downloading packages:
warning: /var/cache/yum/x86_64/7Server/mongodb-org-3.6/packages/mongodb-org-3.6.4-1.el7.x86_64.rpm: Header V3 RSA/SHA1 Signature, key ID 91fa4ad5: NOKEY
Public key for mongodb-org-3.6.4-1.el7.x86_64.rpm is not installed
(1/5): mongodb-org-3.6.4-1.el7.x86_64.rpm                                    | 5.8 kB  00:00:01     
(2/5): mongodb-org-mongos-3.6.4-1.el7.x86_64.rpm                             |  12 MB  00:00:32     
(3/5): mongodb-org-server-3.6.4-1.el7.x86_64.rpm                             |  20 MB  00:00:57     
(4/5): mongodb-org-shell-3.6.4-1.el7.x86_64.rpm                              |  12 MB  00:00:30     
(5/5): mongodb-org-tools-3.6.4-1.el7.x86_64.rpm                              |  46 MB  00:01:06     
----------------------------------------------------------------------------------------------------
Total                                                               740 kB/s |  90 MB  00:02:04     
Retrieving key from https://www.mongodb.org/static/pgp/server-3.6.asc
Importing GPG key 0x91FA4AD5:
 Userid     : "MongoDB 3.6 Release Signing Key "
 Fingerprint: 2930 adae 8caf 5059 ee73 bb4b 5871 2a22 91fa 4ad5
 From       : https://www.mongodb.org/static/pgp/server-3.6.asc
Is this ok [y/N]: y
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : mongodb-org-shell-3.6.4-1.el7.x86_64                                             1/5 
  Installing : mongodb-org-tools-3.6.4-1.el7.x86_64                                             2/5 
  Installing : mongodb-org-mongos-3.6.4-1.el7.x86_64                                            3/5 
  Installing : mongodb-org-server-3.6.4-1.el7.x86_64                                            4/5 
Created symlink from /etc/systemd/system/multi-user.target.wants/mongod.service to /usr/lib/systemd/system/mongod.service.
  Installing : mongodb-org-3.6.4-1.el7.x86_64                                                   5/5 
  Verifying  : mongodb-org-3.6.4-1.el7.x86_64                                                   1/5 
  Verifying  : mongodb-org-server-3.6.4-1.el7.x86_64                                            2/5 
  Verifying  : mongodb-org-mongos-3.6.4-1.el7.x86_64                                            3/5 
  Verifying  : mongodb-org-tools-3.6.4-1.el7.x86_64                                             4/5 
  Verifying  : mongodb-org-shell-3.6.4-1.el7.x86_64                                             5/5 

Installed:
  mongodb-org.x86_64 0:3.6.4-1.el7                                                                  

Dependency Installed:
  mongodb-org-mongos.x86_64 0:3.6.4-1.el7          mongodb-org-server.x86_64 0:3.6.4-1.el7         
  mongodb-org-shell.x86_64 0:3.6.4-1.el7           mongodb-org-tools.x86_64 0:3.6.4-1.el7          

Complete!
[root@mongo0 ~]#

Network Manager

As we do not need Network Manager we will disable it entirely.

[root@mongo0 ~]# systemctl list-unit-files | grep -i network
dbus-org.freedesktop.NetworkManager.service   enabled 
NetworkManager-dispatcher.service             enabled 
NetworkManager-wait-online.service            enabled 
NetworkManager.service                        enabled 
network-online.target                         static  
network-pre.target                            static  
network.target                                static 

[root@mongo0 ~]# systemctl stop NetworkManager

[root@mongo0 ~]# systemctl disable NetworkManager 
Removed symlink /etc/systemd/system/multi-user.target.wants/NetworkManager.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.NetworkManager.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.nm-dispatcher.service.

[root@mongo0 ~]# systemctl stop NetworkManager-wait-online

[root@mongo0 ~]# systemctl disable NetworkManager-wait-online 
Removed symlink /etc/systemd/system/network-online.target.wants/NetworkManager-wait-online.service.

[root@mongo0 ~]# systemctl stop NetworkManager-dispatcher

[root@mongo0 ~]# systemctl disable NetworkManager-dispatcher

[root@mongo0 ~]# systemctl list-unit-files | grep -i network
NetworkManager-dispatcher.service             disabled
NetworkManager-wait-online.service            disabled
NetworkManager.service                        disabled
network-online.target                         static  
network-pre.target                            static  
network.target                                static
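
The same three stop/disable pairs can also be done in a single loop; a compact equivalent of the commands above:

[root@mongo0 ~]# for S in NetworkManager NetworkManager-wait-online NetworkManager-dispatcher; do systemctl stop ${S}; systemctl disable ${S}; done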

SELinux

We do not need SELinux either.

[root@mongo0 ~]# sestatus 
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   enforcing
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Max kernel policy version:      29

[root@mongo0 ~]# setenforce 0

[root@mongo0 ~]# sestatus 
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   permissive
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Max kernel policy version:      29

[root@mongo0 ~]# cat /etc/sysconfig/selinux 

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of three two values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected. 
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted 


[root@mongo0 ~]# sed -i -e 's@^SELINUX=enforcing$@SELINUX=disabled@g' /etc/selinux/config

[root@mongo0 ~]# cat /etc/sysconfig/selinux 

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three two values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected. 
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
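
Keep in mind that setenforce 0 only changed the runtime mode; the SELINUX=disabled setting will take effect at the next reboot. A quick way to compare both states (a convenience check, not from the original session):

[root@mongo0 ~]# getenforce

[root@mongo0 ~]# grep ^SELINUX= /etc/selinux/config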

Firewall

… and we will also put firewalld and iptables into the disabled state.

[root@mongo0 ~]# systemctl stop firewalld

[root@mongo0 ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

[root@mongo0 ~]# iptables -nvL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Old Deterministic Naming Scheme

With the introduction of RHEL 7.x (and Oracle Linux and CentOS systems are just ‘dumb’ clones of it) the old network interface naming scheme eth0, eth1 is gone. In 7.x the interfaces are named according to the “Predictable Interface Names” scheme which makes these names very unpredictable … fortunately there is a way to move back to the old ‘unpredictable’ RHEL 6.x naming scheme with the net.ifnames=0 biosdevname=0 options in the GRUB_CMDLINE_LINUX variable in the /etc/default/grub file. Let's do it then.

[root@mongo0 ~]# cat /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="rhgb quiet"
GRUB_DISABLE_RECOVERY="true"

[root@mongo0 ~]# cp /etc/default/grub /etc/default/grub.ORG

[root@mongo0 ~]# vi /etc/default/grub

[root@mongo0 ~]# diff -u /etc/default/grub.ORG /etc/default/grub
--- /etc/default/grub.ORG       2018-04-24 10:56:03.094000000 +0200
+++ /etc/default/grub   2018-04-24 10:56:13.668000000 +0200
@@ -3,5 +3,5 @@
 GRUB_DEFAULT=saved
 GRUB_DISABLE_SUBMENU=true
 GRUB_TERMINAL_OUTPUT="console"
-GRUB_CMDLINE_LINUX="rhgb quiet"
+GRUB_CMDLINE_LINUX="rhgb quiet net.ifnames=0 biosdevname=0"
 GRUB_DISABLE_RECOVERY="true"
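
By the way, the same edit can be scripted instead of being made interactively in vi; a sketch with sed, assuming the GRUB_CMDLINE_LINUX line still has its stock value shown above:

[root@mongo0 ~]# sed -i -e 's@^GRUB_CMDLINE_LINUX="rhgb quiet"$@GRUB_CMDLINE_LINUX="rhgb quiet net.ifnames=0 biosdevname=0"@' /etc/default/grub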

[root@mongo0 ~]# grub2-mkconfig
Generating grub configuration file ...
#
# DO NOT EDIT THIS FILE
#
# It is automatically generated by grub2-mkconfig using templates
# from /etc/grub.d and settings from /etc/default/grub
#

### BEGIN /etc/grub.d/00_header ###
set pager=1

if [ -s $prefix/grubenv ]; then
  load_env
fi
if [ "${next_entry}" ] ; then
   set default="${next_entry}"
   set next_entry=
   save_env next_entry
   set boot_once=true
else
   set default="${saved_entry}"
fi

if [ x"${feature_menuentry_id}" = xy ]; then
  menuentry_id_option="--id"
else
  menuentry_id_option=""
fi

export menuentry_id_option

if [ "${prev_saved_entry}" ]; then
  set saved_entry="${prev_saved_entry}"
  save_env saved_entry
  set prev_saved_entry=
  save_env prev_saved_entry
  set boot_once=true
fi

function savedefault {
  if [ -z "${boot_once}" ]; then
    saved_entry="${chosen}"
    save_env saved_entry
  fi
}

function load_video {
  if [ x$feature_all_video_module = xy ]; then
    insmod all_video
  else
    insmod efi_gop
    insmod efi_uga
    insmod ieee1275_fb
    insmod vbe
    insmod vga
    insmod video_bochs
    insmod video_cirrus
  fi
}

terminal_output console
if [ x$feature_timeout_style = xy ] ; then
  set timeout_style=menu
  set timeout=5
# Fallback normal timeout code in case the timeout_style feature is
# unavailable.
else
  set timeout=5
fi
### END /etc/grub.d/00_header ###

### BEGIN /etc/grub.d/00_tuned ###
set tuned_params=""
set tuned_initrd=""
### END /etc/grub.d/00_tuned ###

### BEGIN /etc/grub.d/01_users ###
if [ -f ${prefix}/user.cfg ]; then
  source ${prefix}/user.cfg
  if [ -n "${GRUB2_PASSWORD}" ]; then
    set superusers="root"
    export superusers
    password_pbkdf2 root ${GRUB2_PASSWORD}
  fi
fi
### END /etc/grub.d/01_users ###

### BEGIN /etc/grub.d/10_linux ###
Found linux image: /boot/vmlinuz-4.1.12-124.14.1.el7uek.x86_64
Found initrd image: /boot/initramfs-4.1.12-124.14.1.el7uek.x86_64.img
menuentry 'Oracle Linux Server (4.1.12-124.14.1.el7uek.x86_64 with Unbreakable Enterprise Kernel) 7.5' --class oracle --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-4.1.12-124.14.1.el7uek.x86_64-advanced-621c9873-8ad4-4a24-9a2f-14763bb1b77f' {
        load_video
        set gfxpayload=keep
        insmod gzio
        insmod part_msdos
        insmod xfs
        set root='hd0,msdos1'
        if [ x$feature_platform_search_hint = xy ]; then
          search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 --hint='hd0,msdos1'  621c9873-8ad4-4a24-9a2f-14763bb1b77f
        else
          search --no-floppy --fs-uuid --set=root 621c9873-8ad4-4a24-9a2f-14763bb1b77f
        fi
        linux16 /boot/vmlinuz-4.1.12-124.14.1.el7uek.x86_64 root=UUID=621c9873-8ad4-4a24-9a2f-14763bb1b77f ro rhgb quiet net.ifnames=0 biosdevname=0 
        initrd16 /boot/initramfs-4.1.12-124.14.1.el7uek.x86_64.img
}
Found linux image: /boot/vmlinuz-3.10.0-862.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-862.el7.x86_64.img
menuentry 'Oracle Linux Server (3.10.0-862.el7.x86_64 with Linux) 7.5' --class oracle --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-3.10.0-862.el7.x86_64-advanced-621c9873-8ad4-4a24-9a2f-14763bb1b77f' {
        load_video
        set gfxpayload=keep
        insmod gzio
        insmod part_msdos
        insmod xfs
        set root='hd0,msdos1'
        if [ x$feature_platform_search_hint = xy ]; then
          search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 --hint='hd0,msdos1'  621c9873-8ad4-4a24-9a2f-14763bb1b77f
        else
          search --no-floppy --fs-uuid --set=root 621c9873-8ad4-4a24-9a2f-14763bb1b77f
        fi
        linux16 /boot/vmlinuz-3.10.0-862.el7.x86_64 root=UUID=621c9873-8ad4-4a24-9a2f-14763bb1b77f ro rhgb quiet net.ifnames=0 biosdevname=0 
        initrd16 /boot/initramfs-3.10.0-862.el7.x86_64.img
}
Found linux image: /boot/vmlinuz-0-rescue-141943d2370a45fe9230ea2413f80d41
Found initrd image: /boot/initramfs-0-rescue-141943d2370a45fe9230ea2413f80d41.img
menuentry 'Oracle Linux Server (0-rescue-141943d2370a45fe9230ea2413f80d41 with Linux) 7.5' --class oracle --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-0-rescue-141943d2370a45fe9230ea2413f80d41-advanced-621c9873-8ad4-4a24-9a2f-14763bb1b77f' {
        load_video
        insmod gzio
        insmod part_msdos
        insmod xfs
        set root='hd0,msdos1'
        if [ x$feature_platform_search_hint = xy ]; then
          search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 --hint='hd0,msdos1'  621c9873-8ad4-4a24-9a2f-14763bb1b77f
        else
          search --no-floppy --fs-uuid --set=root 621c9873-8ad4-4a24-9a2f-14763bb1b77f
        fi
        linux16 /boot/vmlinuz-0-rescue-141943d2370a45fe9230ea2413f80d41 root=UUID=621c9873-8ad4-4a24-9a2f-14763bb1b77f ro rhgb quiet net.ifnames=0 biosdevname=0 
        initrd16 /boot/initramfs-0-rescue-141943d2370a45fe9230ea2413f80d41.img
}

### END /etc/grub.d/10_linux ###

### BEGIN /etc/grub.d/20_linux_xen ###

### END /etc/grub.d/20_linux_xen ###

### BEGIN /etc/grub.d/20_ppc_terminfo ###
### END /etc/grub.d/20_ppc_terminfo ###

### BEGIN /etc/grub.d/30_os-prober ###
### END /etc/grub.d/30_os-prober ###

### BEGIN /etc/grub.d/40_custom ###
# This file provides an easy way to add custom menu entries.  Simply type the
# menu entries you want to add after this comment.  Be careful not to change
# the 'exec tail' line above.
### END /etc/grub.d/40_custom ###

### BEGIN /etc/grub.d/41_custom ###
if [ -f  ${config_directory}/custom.cfg ]; then
  source ${config_directory}/custom.cfg
elif [ -z "${config_directory}" -a -f  $prefix/custom.cfg ]; then
  source $prefix/custom.cfg;
fi
### END /etc/grub.d/41_custom ###
done
[root@mongo0 ~]# 

[root@mongo0 ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-4.1.12-124.14.1.el7uek.x86_64
Found initrd image: /boot/initramfs-4.1.12-124.14.1.el7uek.x86_64.img
Found linux image: /boot/vmlinuz-3.10.0-862.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-862.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-141943d2370a45fe9230ea2413f80d41
Found initrd image: /boot/initramfs-0-rescue-141943d2370a45fe9230ea2413f80d41.img
done
[root@mongo0 ~]#

Network

As the Anaconda installer got the award for the worst installer [Citation Needed] we will now have to clean up the configuration files it generated. The interface is still enp0s3 instead of eth0 because we have not rebooted yet.

Below are files generated by Anaconda installer.

[root@mongo0 ~]# ip li 
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp0s3:  mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:dd:93:cf brd ff:ff:ff:ff:ff:ff

[root@mongo0 ~]# cat /etc/sysconfig/network
# Created by anaconda

[root@mongo0 ~]# cat /etc/sysconfig/network-scripts/ifcfg-enp0s3
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="none"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="enp0s3"
UUID="3eba4d78-3392-49ed-807e-70fe6bc134b7"
DEVICE="enp0s3"
ONBOOT="yes"
IPADDR="10.0.10.10"
PREFIX="24"
IPV6_PRIVACY="no"
GATEWAY="10.0.10.1"
DNS1="1.1.1.1"

Let's do some cleanup and ‘migrate’ to the old ethX naming scheme.

[root@mongo0 ~]# mv /etc/sysconfig/network-scripts/ifcfg-enp0s3 /etc/sysconfig/network-scripts/ifcfg-eth0

[root@mongo0 ~]# cat !$ | tr -d \" > ASD

[root@mongo0 ~]# mv -f !$ /etc/sysconfig/network-scripts/ifcfg-eth0

[root@mongo0 ~]# grep GATEWAY /etc/sysconfig/network-scripts/ifcfg-eth0 > /etc/sysconfig/network

[root@mongo0 ~]# cat /etc/sysconfig/network
GATEWAY=10.0.10.1

[root@mongo0 ~]# cp /etc/sysconfig/network-scripts/ifcfg-eth0 /root/ifcfg-eth0.ORG

[root@mongo0 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0

[root@mongo0 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
BOOTPROTO=none
IPV6INIT=no
NAME=eth0
DEVICE=eth0
ONBOOT=yes
IPADDR=10.0.10.10
PREFIX=24

[root@mongo0 ~]# echo nameserver 1.1.1.1 > /etc/resolv.conf 

[root@mongo0 ~]# diff -u /root/ifcfg-eth0.ORG /etc/sysconfig/network-scripts/ifcfg-eth0
--- /root/ifcfg-eth0.ORG        2018-04-24 11:00:17.493000000 +0200
+++ /etc/sysconfig/network-scripts/ifcfg-eth0   2018-04-24 11:00:57.914000000 +0200
@@ -1,20 +1,8 @@
 TYPE=Ethernet
-PROXY_METHOD=none
-BROWSER_ONLY=no
 BOOTPROTO=none
-DEFROUTE=yes
-IPV4_FAILURE_FATAL=no
-IPV6INIT=yes
-IPV6_AUTOCONF=yes
-IPV6_DEFROUTE=yes
-IPV6_FAILURE_FATAL=no
-IPV6_ADDR_GEN_MODE=stable-privacy
-NAME=enp0s3
-UUID=3eba4d78-3392-49ed-807e-70fe6bc134b7
-DEVICE=enp0s3
+IPV6INIT=no
+NAME=eth0
+DEVICE=eth0
 ONBOOT=yes
 IPADDR=10.0.10.10
 PREFIX=24
-IPV6_PRIVACY=no
-GATEWAY=10.0.10.1
-DNS1=1.1.1.1

We will now reboot the system to get the eth0 interface.

[root@mongo0 ~]# reboot

After the reboot the interface is plain old eth0 device.

[root@mongo0 ~]# ip li
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0:  mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:dd:93:cf brd ff:ff:ff:ff:ff:ff

Vim

If you work with these hosts over PuTTY this may (or may not) make your work more pleasant – it removes the a flag from Vim's mouse option so the terminal's own mouse selection and paste keep working.

[root@mongo0 ~]# echo 'set mouse-=a' >> /root/.vimrc

Filesystem

We will disable atime for performance reasons in the /etc/fstab file.

[root@mongo0 ~]# cat /etc/fstab 

#
# /etc/fstab
# Created by anaconda on Tue Apr 24 00:23:14 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=621c9873-8ad4-4a24-9a2f-14763bb1b77f /                       xfs     defaults        0 0

[root@mongo0 ~]# sed -i -e s@defaults@rw,noatime,nodiratime@g /etc/fstab

[root@mongo0 ~]# cat /etc/fstab 

#
# /etc/fstab
# Created by anaconda on Tue Apr 24 00:23:14 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=621c9873-8ad4-4a24-9a2f-14763bb1b77f /                       xfs     rw,noatime,nodiratime        0 0
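
To apply the new options without a reboot, the root filesystem can also be remounted in place:

[root@mongo0 ~]# mount -o remount,noatime,nodiratime /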

Let's see what output the mount command will give us on a modern Linux system with just one single / filesystem …

[root@mongo0 ~]# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,size=746928k,nr_inodes=186732,mode=755)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
configfs on /sys/kernel/config type configfs (rw,relatime)
/dev/sda1 on / type xfs (rw,noatime,nodiratime,attr2,inode64,noquota)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=32,pgrp=1,timeout=0,minproto=5,maxproto=5,direct)
mqueue on /dev/mqueue type mqueue (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=153040k,mode=700)

Horrible mess.

We can limit that output to something readable.

[root@mongo0 ~]# mount -t xfs
/dev/sda1 on / type xfs (rw,noatime,nodiratime,attr2,inode64,noquota)

[root@mongo0 ~]# mount | grep ^/
/dev/sda1 on / type xfs (rw,noatime,nodiratime,attr2,inode64,noquota)

Better.

Time Daemon

As with every cluster we will have to install and configure a time daemon, ntp for example.

First installation …

[root@mongo0 ~]# yum install ntp
Loaded plugins: ulninfo
mongodb-org-3.6                                                                         | 2.5 kB  00:00:00     
ol7_UEKR4                                                                               | 1.2 kB  00:00:00     
ol7_latest                                                                              | 1.4 kB  00:00:00     
Resolving Dependencies
--> Running transaction check
---> Package ntp.x86_64 0:4.2.6p5-28.0.1.el7 will be installed
--> Processing Dependency: ntpdate = 4.2.6p5-28.0.1.el7 for package: ntp-4.2.6p5-28.0.1.el7.x86_64
--> Processing Dependency: libopts.so.25()(64bit) for package: ntp-4.2.6p5-28.0.1.el7.x86_64
--> Running transaction check
---> Package autogen-libopts.x86_64 0:5.18-5.el7 will be installed
---> Package ntpdate.x86_64 0:4.2.6p5-28.0.1.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

===========================================================================================================
 Package                       Arch                Version                      Repository            Size
===========================================================================================================
Installing:
 ntp                           x86_64              4.2.6p5-28.0.1.el7           ol7_latest           548 k
Installing for dependencies:
 autogen-libopts               x86_64              5.18-5.el7                   ol7_latest            65 k
 ntpdate                       x86_64              4.2.6p5-28.0.1.el7           ol7_latest            85 k

Transaction Summary
===========================================================================================================
Install  1 Package (+2 Dependent packages)

Total download size: 698 k
Installed size: 1.6 M
Is this ok [y/d/N]: y
Downloading packages:
(1/3): autogen-libopts-5.18-5.el7.x86_64.rpm                                        |  65 kB  00:00:03     
(2/3): ntpdate-4.2.6p5-28.0.1.el7.x86_64.rpm                                        |  85 kB  00:00:00     
(3/3): ntp-4.2.6p5-28.0.1.el7.x86_64.rpm                                            | 548 kB  00:00:05     
-----------------------------------------------------------------------------------------------------------
Total                                                                      136 kB/s | 698 kB  00:00:05     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Warning: RPMDB altered outside of yum.                               
  Installing : autogen-libopts-5.18-5.el7.x86_64                                                        1/3 
  Installing : ntpdate-4.2.6p5-28.0.1.el7.x86_64                                                        2/3 
  Installing : ntp-4.2.6p5-28.0.1.el7.x86_64                                                            3/3 
  Verifying  : ntpdate-4.2.6p5-28.0.1.el7.x86_64                                                        1/3 
  Verifying  : autogen-libopts-5.18-5.el7.x86_64                                                        2/3 
  Verifying  : ntp-4.2.6p5-28.0.1.el7.x86_64                                                            3/3 

Installed:
  ntp.x86_64 0:4.2.6p5-28.0.1.el7                                                                                                   

Dependency Installed:
  autogen-libopts.x86_64 0:5.18-5.el7                              ntpdate.x86_64 0:4.2.6p5-28.0.1.el7                             

Complete!
[root@mongo0 ~]#

… and configuration. The -g option allows ntpd to make the large initial time adjustment and the -x option forces the clock to be slewed (adjusted gradually) instead of stepped.

[root@mongo0 ~]# cat /etc/sysconfig/ntpd
# Command line options for ntpd
OPTIONS="-g"

[root@mongo0 ~]# cp /etc/sysconfig/ntpd /etc/sysconfig/ntpd.ORG

[root@mongo0 ~]# vi /etc/sysconfig/ntpd

[root@mongo0 ~]# cat /etc/sysconfig/ntpd
# Command line options for ntpd
OPTIONS="-g -x"

[root@mongo0 ~]# diff -u /etc/sysconfig/ntpd.ORG /etc/sysconfig/ntpd
--- /etc/sysconfig/ntpd.ORG     2018-04-24 15:22:46.215788131 +0200
+++ /etc/sysconfig/ntpd 2018-04-24 15:22:31.464368114 +0200
@@ -1,2 +1,2 @@
 # Command line options for ntpd
-OPTIONS="-g"
+OPTIONS="-g -x"
[root@mongo0 ~]# 

[root@mongo0 ~]# systemctl start ntpd

[root@mongo0 ~]# systemctl enable ntpd
Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service to /usr/lib/systemd/system/ntpd.service.

[root@mongo0 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
+jamesl.tk       130.149.17.8     2 u    3   64    1   99.361  -66.467  38.578
+sunsite.icm.edu 194.146.251.100  2 u    2   64    1   79.675  -55.486  12.338
+maggo.info      124.216.164.14   2 u    3   64    1   84.604  -65.630  14.804
*91-211-101-141. 5.226.98.186     2 u    2   64    1   74.071  -62.252  14.619
[root@mongo0 ~]#

Attack of the Clones

As our mongo0 machine install is finished we can now power it off and clone it into the remaining mongo1/mongo2/mongo3/mongo4 nodes.
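
Here is a sketch of how that cloning can be scripted with VBoxManage once mongo0 is powered off (the mongo* virtual machine names are an assumption here, use the names your VMs actually have):

host % for I in 1 2 3 4; do VBoxManage clonevm mongo0 --name mongo${I} --register; done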

SSH

Let's set up the SSH keys so we do not have to type a password every time we want to do anything.

host % ssh-copy-id -i ~/.ssh/id_rsa.pub -p 2200 root@localhost
Password for root@mongo0:

host % ssh -p 2200 root@localhost
[root@mongo0 ~]#
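
With the port forwarding in place, the same key can be pushed to all five nodes in one loop (a convenience sketch, assuming ports 2200-2204 map to mongo0-mongo4):

host % for I in 0 1 2 3 4; do ssh-copy-id -i ~/.ssh/id_rsa.pub -p 220${I} root@localhost; done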

Cluster SSH

For convenience you may wish to use Cluster SSH to connect to all nodes at once for the tasks that are the same on all of them.

Here is the Cluster SSH cssh command used to connect to our MongoDB cluster.

host % cssh \
  root@localhost:2200 \
  root@localhost:2201 \
  root@localhost:2202 \
  root@localhost:2203 \
  root@localhost:2204

… or like that.

host % cssh root@localhost:220{0,1,2,3,4}

… and here is how it looks.

clusterssh-mongodb
If there are tasks to be done only on the DATA nodes you may of course connect only to those 4 nodes with Cluster SSH.
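
For example, to open sessions to just the four DATA nodes:

host % cssh root@localhost:220{0,1,2,3}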

Environment

As we have our clones ready let's start them.

host % for I in 0 1 2 3 4; do VBoxManage startvm mongo${I} --type headless; done
Waiting for VM "mongo0" to power on...
VM "mongo0" has been successfully started.
Waiting for VM "mongo1" to power on...
VM "mongo1" has been successfully started.
Waiting for VM "mongo2" to power on...
VM "mongo2" has been successfully started.
Waiting for VM "mongo3" to power on...
VM "mongo3" has been successfully started.
Waiting for VM "mongo4" to power on...
VM "mongo4" has been successfully started.

As we have our nodes installed and started let's check the connectivity between them.

[root@mongo0 ~]# awk '/mongo/ {print $1}' /etc/hosts | xargs -n1 ping -c 1 -t 3 | grep loss
1 packets transmitted, 1 received, 0% packet loss, time 0ms
1 packets transmitted, 1 received, 0% packet loss, time 0ms
1 packets transmitted, 1 received, 0% packet loss, time 0ms
1 packets transmitted, 1 received, 0% packet loss, time 0ms
1 packets transmitted, 1 received, 0% packet loss, time 0ms

Let's verify that MongoDB is installed.

host % for I in 0 1 2 3 4; do ssh -p 220${I} root@localhost which mongod; done
/usr/bin/mongod
/usr/bin/mongod
/usr/bin/mongod
/usr/bin/mongod
/usr/bin/mongod

Now we will configure the /etc/hosts file.

host % for I in 0 1 2 3 4; do ssh -p 220${I} root@localhost 'cat >> /etc/hosts << __EOF
10.0.10.10 mongo0
10.0.10.11 mongo1
10.0.10.12 mongo2
10.0.10.13 mongo3
10.0.10.14 mongo4
__EOF'
done

Let's verify it.

host % for I in 0 1 2 3 4; do ssh -p 220${I} root@localhost "grep mongo${I} /etc/hosts"; done
10.0.10.10 mongo0
10.0.10.11 mongo1
10.0.10.12 mongo2
10.0.10.13 mongo3
10.0.10.14 mongo4

MongoDB

It is now (at last) time to configure MongoDB, so let's start with the configuration files.

Configuration Files

Create the config files for the MongoDB data nodes.

host % for I in 0 1 2 3; do ssh -p 220${I} root@localhost "cat > /etc/mongod.conf << __EOF
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

storage:
  dbPath: /var/lib/mongo
  journal.enabled: true  # ONLY DIFFERENCE BETWEEN DATA AND ARBITER NODE #

processManagement:
  fork: true
  pidFilePath: /var/run/mongodb/mongod.pid
  timeZoneInfo: /usr/share/zoneinfo

net:
  port: 27017
  bindIp: localhost,10.0.10.1${I}

replication:
   replSetName: \"replica0\"

__EOF"
done

Create the config file for the MongoDB arbiter node.

host % for I in 4; do ssh -p 220${I} root@localhost "cat > /etc/mongod.conf << __EOF
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

storage:
  dbPath: /var/lib/mongo
  journal.enabled: false # ONLY DIFFERENCE BETWEEN DATA AND ARBITER NODE #

processManagement:
  fork: true
  pidFilePath: /var/run/mongodb/mongod.pid
  timeZoneInfo: /usr/share/zoneinfo

net:
  port: 27017
  bindIp: localhost,10.0.10.1${I}

replication:
   replSetName: \"replica0\"

__EOF"
done

Let's verify these configuration files.

host % for I in 0 1 2 3 4; do ssh -p 220${I} root@localhost grep -H bindIp /etc/mongod.conf; done  
/etc/mongod.conf:  bindIp: localhost,10.0.10.10
/etc/mongod.conf:  bindIp: localhost,10.0.10.11
/etc/mongod.conf:  bindIp: localhost,10.0.10.12
/etc/mongod.conf:  bindIp: localhost,10.0.10.13
/etc/mongod.conf:  bindIp: localhost,10.0.10.14
host % for I in 0 1 2 3 4; do ssh -p 220${I} root@localhost grep -H /var /etc/mongod.conf; echo; done | column -t 
/etc/mongod.conf:  path:         /var/log/mongodb/mongod.log
/etc/mongod.conf:  dbPath:       /var/lib/mongo
/etc/mongod.conf:  pidFilePath:  /var/run/mongodb/mongod.pid

/etc/mongod.conf:  path:         /var/log/mongodb/mongod.log
/etc/mongod.conf:  dbPath:       /var/lib/mongo
/etc/mongod.conf:  pidFilePath:  /var/run/mongodb/mongod.pid

/etc/mongod.conf:  path:         /var/log/mongodb/mongod.log
/etc/mongod.conf:  dbPath:       /var/lib/mongo
/etc/mongod.conf:  pidFilePath:  /var/run/mongodb/mongod.pid

/etc/mongod.conf:  path:         /var/log/mongodb/mongod.log
/etc/mongod.conf:  dbPath:       /var/lib/mongo
/etc/mongod.conf:  pidFilePath:  /var/run/mongodb/mongod.pid

/etc/mongod.conf:  path:         /var/log/mongodb/mongod.log
/etc/mongod.conf:  dbPath:       /var/lib/mongo
/etc/mongod.conf:  pidFilePath:  /var/run/mongodb/mongod.pid
host % for I in 0 1 2 3 4; do ssh -p 220${I} root@localhost grep -H DIFFERENCE /etc/mongod.conf; done 
/etc/mongod.conf:  journal.enabled: true  # ONLY DIFFERENCE BETWEEN DATA AND ARBITER NODE #
/etc/mongod.conf:  journal.enabled: true  # ONLY DIFFERENCE BETWEEN DATA AND ARBITER NODE #
/etc/mongod.conf:  journal.enabled: true  # ONLY DIFFERENCE BETWEEN DATA AND ARBITER NODE #
/etc/mongod.conf:  journal.enabled: true  # ONLY DIFFERENCE BETWEEN DATA AND ARBITER NODE #
/etc/mongod.conf:  journal.enabled: false # ONLY DIFFERENCE BETWEEN DATA AND ARBITER NODE #

Let's start the MongoDB nodes; if MongoDB is already running with the default config (not ours) then restart it.

host % for I in 0 1 2 3 4; do ssh -p 220${I} root@localhost service mongod stop; done
Redirecting to /bin/systemctl stop mongod.service
Redirecting to /bin/systemctl stop mongod.service
Redirecting to /bin/systemctl stop mongod.service
Redirecting to /bin/systemctl stop mongod.service
Redirecting to /bin/systemctl stop mongod.service
host % for I in 0 1 2 3 4; do ssh -p 220${I} root@localhost service mongod start; done     
Redirecting to /bin/systemctl start mongod.service
Redirecting to /bin/systemctl start mongod.service
Redirecting to /bin/systemctl start mongod.service
Redirecting to /bin/systemctl start mongod.service
Redirecting to /bin/systemctl start mongod.service

Let's verify that MongoDB is running on our nodes with the new configuration.

host % for I in 0 1 2 3 4; do ssh -p 220${I} root@localhost pgrep mongod; done                             
735
744
738
736
748
host % for I in 0 1 2 3 4; do ssh -p 220${I} root@localhost ss -ln4; echo; done
Netid  State      Recv-Q Send-Q Local Address:Port               Peer Address:Port              
udp    UNCONN     0      0      10.0.10.10:123                   *:*                  
udp    UNCONN     0      0      127.0.0.1:123                   *:*                  
udp    UNCONN     0      0         *:123                   *:*                  
tcp    LISTEN     0      128    127.0.0.1:27017                 *:*                  
tcp    LISTEN     0      128       *:22                    *:*                  
tcp    LISTEN     0      100    127.0.0.1:25                    *:*                  

Netid  State      Recv-Q Send-Q Local Address:Port               Peer Address:Port              
udp    UNCONN     0      0      10.0.10.11:123                   *:*                  
udp    UNCONN     0      0      127.0.0.1:123                   *:*                  
udp    UNCONN     0      0         *:123                   *:*                  
tcp    LISTEN     0      128    127.0.0.1:27017                 *:*                  
tcp    LISTEN     0      128       *:22                    *:*                  
tcp    LISTEN     0      100    127.0.0.1:25                    *:*                  

Netid  State      Recv-Q Send-Q Local Address:Port               Peer Address:Port              
udp    UNCONN     0      0      10.0.10.12:123                   *:*                  
udp    UNCONN     0      0      127.0.0.1:123                   *:*                  
udp    UNCONN     0      0         *:123                   *:*                  
tcp    LISTEN     0      128    127.0.0.1:27017                 *:*                  
tcp    LISTEN     0      128       *:22                    *:*                  
tcp    LISTEN     0      100    127.0.0.1:25                    *:*                  

Netid  State      Recv-Q Send-Q Local Address:Port               Peer Address:Port              
udp    UNCONN     0      0      10.0.10.13:123                   *:*                  
udp    UNCONN     0      0      127.0.0.1:123                   *:*                  
udp    UNCONN     0      0         *:123                   *:*                  
tcp    LISTEN     0      128    127.0.0.1:27017                 *:*                  
tcp    LISTEN     0      128       *:22                    *:*                  
tcp    LISTEN     0      100    127.0.0.1:25                    *:*                  

Netid  State      Recv-Q Send-Q Local Address:Port               Peer Address:Port              
udp    UNCONN     0      0      10.0.10.14:123                   *:*                  
udp    UNCONN     0      0      127.0.0.1:123                   *:*                  
udp    UNCONN     0      0         *:123                   *:*                  
tcp    LISTEN     0      128    127.0.0.1:27017                 *:*                  
tcp    LISTEN     0      128       *:22                    *:*                  
tcp    LISTEN     0      100    127.0.0.1:25                    *:*

Replica Set

We may now configure our MongoDB Replica Set Cluster.

We will use the replica0 name for the replica set.

We will paste these instructions into the MongoDB prompt on the first node (mongo0) to configure the replica set.

use admin
rs.initiate(
  {
    _id : "replica0",
    members: [
      { _id: 0, host: "mongo0:27017" },
      { _id: 1, host: "mongo1:27017" },
      { _id: 2, host: "mongo2:27017" },
      { _id: 3, host: "mongo3:27017" }
    ]
  }
)

Let's do it then. As you paste it you will see that the prompt changes to the replica0:SECONDARY> string. Hit [ENTER] once a second and after about 15-20 seconds it should change to replica0:PRIMARY> as this will be the current role of this node in the cluster after forming it.

% ssh root@localhost -p 2200
Last login: Tue Apr 24 14:39:06 2018 from 10.0.10.2
[root@mongo0 ~]# mongo
MongoDB shell version v3.6.4
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.6.4
Server has startup warnings: 
2018-04-24T14:39:33.161+0200 I CONTROL  [initandlisten] 
2018-04-24T14:39:33.162+0200 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2018-04-24T14:39:33.162+0200 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2018-04-24T14:39:33.162+0200 I CONTROL  [initandlisten] 
> use admin
switched to db admin
> rs.initiate(
...   {
...     _id : "replica0",
...     members: [
...       { _id: 0, host: "mongo0:27017" },
...       { _id: 1, host: "mongo1:27017" },
...       { _id: 2, host: "mongo2:27017" },
...       { _id: 3, host: "mongo3:27017" }
...     ]
...   }
... )
{
        "ok" : 1,
        "operationTime" : Timestamp(1524574334, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1524574334, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
replica0:SECONDARY> 
replica0:SECONDARY> 
replica0:SECONDARY> 
replica0:SECONDARY> 
replica0:SECONDARY> 
replica0:SECONDARY> 
replica0:SECONDARY> 
replica0:SECONDARY> 
replica0:SECONDARY> 
replica0:PRIMARY> 
replica0:PRIMARY>
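
For the record, the same initiation can also be done non-interactively from the shell with mongo --eval; a sketch, not part of the original session:

[root@mongo0 ~]# mongo --quiet --eval 'rs.initiate({ _id: "replica0", members: [ { _id: 0, host: "mongo0:27017" }, { _id: 1, host: "mongo1:27017" }, { _id: 2, host: "mongo2:27017" }, { _id: 3, host: "mongo3:27017" } ] })'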

We will now create the admin (less powerful) and root (as the name suggests it can do anything) users on our new MongoDB cluster.

We will paste these instructions into the MongoDB prompt on the PRIMARY node (currently mongo0) to add the users.

use admin
db.createUser(
  {
    user: "admin",
    pwd: "ADMIN-PASSWORD",
    roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
  }
)
use admin
db.createUser(
  {
    user: "root",
    pwd: "ROOT-PASSWORD",
    roles:["root"]
  }
)

Let's do it then.

replica0:PRIMARY> use admin
switched to db admin
replica0:PRIMARY> db.createUser(
...   {
...     user: "admin",
...     pwd: "ADMIN-PASSWORD",
...     roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
...   }
... )
Successfully added user: {
        "user" : "admin",
        "roles" : [
                {
                        "role" : "userAdminAnyDatabase",
                        "db" : "admin"
                }
        ]
}
replica0:PRIMARY>
replica0:PRIMARY> use admin
switched to db admin
replica0:PRIMARY> db.createUser(
...   {
...     user: "root",
...     pwd: "ROOT-PASSWORD",
...     roles:["root"]
...   }
... )
Successfully added user: { "user" : "root", "roles" : [ "root" ] }
replica0:PRIMARY>

We can now exit from the MongoDB prompt.

replica0:PRIMARY> exit
[root@mongo0 ~]#
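
To double check that both users landed in the admin database we can list them from the shell; a quick verification, not part of the original session (authorization is not enabled yet at this point so no credentials are needed):

[root@mongo0 ~]# mongo admin --quiet --eval 'printjson(db.getUsers())'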

We will now stop the MongoDB services in order to enable authorization and also configure a shared keyfile.

host % for I in 0 1 2 3 4; do ssh -p 220${I} root@localhost service mongod stop; done   
Redirecting to /bin/systemctl stop mongod.service
Redirecting to /bin/systemctl stop mongod.service
Redirecting to /bin/systemctl stop mongod.service
Redirecting to /bin/systemctl stop mongod.service
Redirecting to /bin/systemctl stop mongod.service

Let's add the needed settings to the configuration files.

host % for I in 0 1 2 3 4; do ssh -p 220${I} root@localhost "cat >> /etc/mongod.conf << __EOF
security:
  authorization: enabled
  keyFile: /etc/mongod.conf.key

__EOF"
done

Now let's generate a new key …

host % dd < /dev/random bs=1k count=1 2> /dev/null | sha256
66700abfea54b9f07e9767acd912f4ab17f9153fa0718984fe3b0c4fe2116baf

… and put it on the nodes as the /etc/mongod.conf.key file.

host % for I in 0 1 2 3 4; do ssh -p 220${I} root@localhost 'echo 66700abfea54b9f07e9767acd912f4ab17f9153fa0718984fe3b0c4fe2116baf > /etc/mongod.conf.key'; done
host % for I in 0 1 2 3 4; do ssh -p 220${I} root@localhost chmod 600 /etc/mongod.conf.key; done
host % for I in 0 1 2 3 4; do ssh -p 220${I} root@localhost chown mongod:mongod /etc/mongod.conf.key; done

Let's verify our new key is there.

host % for I in 0 1 2 3 4; do ssh -p 220${I} root@localhost cat /etc/mongod.conf.key; done
66700abfea54b9f07e9767acd912f4ab17f9153fa0718984fe3b0c4fe2116baf
66700abfea54b9f07e9767acd912f4ab17f9153fa0718984fe3b0c4fe2116baf
66700abfea54b9f07e9767acd912f4ab17f9153fa0718984fe3b0c4fe2116baf
66700abfea54b9f07e9767acd912f4ab17f9153fa0718984fe3b0c4fe2116baf
66700abfea54b9f07e9767acd912f4ab17f9153fa0718984fe3b0c4fe2116baf
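
It is also worth confirming the mode and ownership, as mongod will refuse to start when the keyfile is readable by group or others:

host % for I in 0 1 2 3 4; do ssh -p 220${I} root@localhost ls -l /etc/mongod.conf.key; done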

We can now start MongoDB with the new settings.

host % for I in 0 1 2 3 4; do ssh -p 220${I} root@localhost service mongod start; done                    
Redirecting to /bin/systemctl start mongod.service
Redirecting to /bin/systemctl start mongod.service
Redirecting to /bin/systemctl start mongod.service
Redirecting to /bin/systemctl start mongod.service
Redirecting to /bin/systemctl start mongod.service

We can now connect to our MongoDB cluster with the root user.

[root@mongo0 ~]# mongo --port 27017 -u root -p ROOT-PASSWORD --authenticationDatabase admin
MongoDB shell version v3.6.4
connecting to: mongodb://127.0.0.1:27017/
MongoDB server version: 3.6.4
replica0:PRIMARY>

Let's see how the MongoDB rs.conf() function shows our configuration (before the ARBITER node role is added).

[root@mongo0 ~]# mongo --port 27017 -u root -p ROOT-PASSWORD --authenticationDatabase admin
MongoDB shell version v3.6.4
connecting to: mongodb://127.0.0.1:27017/
MongoDB server version: 3.6.4
replica0:PRIMARY> rs.conf()
{
        "_id" : "replica0",
        "version" : 1,
        "protocolVersion" : NumberLong(1),
        "members" : [
                {
                        "_id" : 0,
                        "host" : "mongo0:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                },
                {
                        "_id" : 1,
                        "host" : "mongo1:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                },
                {
                        "_id" : 2,
                        "host" : "mongo2:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                },
                {
                        "_id" : 3,
                        "host" : "mongo3:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                }
        ],
        "settings" : {
                "chainingAllowed" : true,
                "heartbeatIntervalMillis" : 2000,
                "heartbeatTimeoutSecs" : 10,
                "electionTimeoutMillis" : 10000,
                "catchUpTimeoutMillis" : -1,
                "catchUpTakeoverDelayMillis" : 30000,
                "getLastErrorModes" : {

                },
                "getLastErrorDefaults" : {
                        "w" : 1,
                        "wtimeout" : 0
                },
                "replicaSetId" : ObjectId("5adf287d597df99256d11280")
        }
}
replica0:PRIMARY>

Let's see how the MongoDB rs.status() function shows our configuration (before the ARBITER node role is added).

[root@mongo0 ~]# mongo --port 27017 -u root -p ROOT-PASSWORD --authenticationDatabase admin
MongoDB shell version v3.6.4
connecting to: mongodb://127.0.0.1:27017/
MongoDB server version: 3.6.4
replica0:PRIMARY> rs.status()
{
        "set" : "replica0",
        "date" : ISODate("2018-04-24T13:14:14.653Z"),
        "myState" : 1,
        "term" : NumberLong(2),
        "heartbeatIntervalMillis" : NumberLong(2000),
        "optimes" : {
                "lastCommittedOpTime" : {
                        "ts" : Timestamp(1524575648, 1),
                        "t" : NumberLong(2)
                },
                "readConcernMajorityOpTime" : {
                        "ts" : Timestamp(1524575648, 1),
                        "t" : NumberLong(2)
                },
                "appliedOpTime" : {
                        "ts" : Timestamp(1524575648, 1),
                        "t" : NumberLong(2)
                },
                "durableOpTime" : {
                        "ts" : Timestamp(1524575648, 1),
                        "t" : NumberLong(2)
                }
        },
        "members" : [
                {
                        "_id" : 0,
                        "name" : "mongo0:27017",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 111,
                        "optime" : {
                                "ts" : Timestamp(1524575648, 1),
                                "t" : NumberLong(2)
                        },
                        "optimeDate" : ISODate("2018-04-24T13:14:08Z"),
                        "infoMessage" : "could not find member to sync from",
                        "electionTime" : Timestamp(1524575557, 1),
                        "electionDate" : ISODate("2018-04-24T13:12:37Z"),
                        "configVersion" : 1,
                        "self" : true
                },
                {
                        "_id" : 1,
                        "name" : "mongo1:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 104,
                        "optime" : {
                                "ts" : Timestamp(1524575648, 1),
                                "t" : NumberLong(2)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1524575648, 1),
                                "t" : NumberLong(2)
                        },
                        "optimeDate" : ISODate("2018-04-24T13:14:08Z"),
                        "optimeDurableDate" : ISODate("2018-04-24T13:14:08Z"),
                        "lastHeartbeat" : ISODate("2018-04-24T13:14:13.546Z"),
                        "lastHeartbeatRecv" : ISODate("2018-04-24T13:14:13.892Z"),
                        "pingMs" : NumberLong(1),
                        "syncingTo" : "mongo0:27017",
                        "configVersion" : 1
                },
                {
                        "_id" : 2,
                        "name" : "mongo2:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 101,
                        "optime" : {
                                "ts" : Timestamp(1524575648, 1),
                                "t" : NumberLong(2)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1524575648, 1),
                                "t" : NumberLong(2)
                        },
                        "optimeDate" : ISODate("2018-04-24T13:14:08Z"),
                        "optimeDurableDate" : ISODate("2018-04-24T13:14:08Z"),
                        "lastHeartbeat" : ISODate("2018-04-24T13:14:13.546Z"),
                        "lastHeartbeatRecv" : ISODate("2018-04-24T13:14:13.863Z"),
                        "pingMs" : NumberLong(1),
                        "syncingTo" : "mongo0:27017",
                        "configVersion" : 1
                },
                {
                        "_id" : 3,
                        "name" : "mongo3:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 99,
                        "optime" : {
                                "ts" : Timestamp(1524575648, 1),
                                "t" : NumberLong(2)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1524575648, 1),
                                "t" : NumberLong(2)
                        },
                        "optimeDate" : ISODate("2018-04-24T13:14:08Z"),
                        "optimeDurableDate" : ISODate("2018-04-24T13:14:08Z"),
                        "lastHeartbeat" : ISODate("2018-04-24T13:14:13.548Z"),
                        "lastHeartbeatRecv" : ISODate("2018-04-24T13:14:13.725Z"),
                        "pingMs" : NumberLong(1),
                        "syncingTo" : "mongo0:27017",
                        "configVersion" : 1
                }
        ],
        "ok" : 1,
        "operationTime" : Timestamp(1524575648, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1524575648, 1),
                "signature" : {
                        "hash" : BinData(0,"v6kIdsFS93nZcf2hJ/EYTrVjsso="),
                        "keyId" : NumberLong("6547996956390588417")
                }
        }
}
replica0:PRIMARY>
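
If you only care about the member states, the whole rs.status() output can be reduced to a one-liner; a convenience sketch, not from the original session:

[root@mongo0 ~]# mongo --quiet -u root -p ROOT-PASSWORD --authenticationDatabase admin --eval 'rs.status().members.forEach(function(m) { print(m.name + " " + m.stateStr) })'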

Arbiter

We can now add the ARBITER role on the mongo4 node.

[root@mongo0 ~]# mongo --port 27017 -u root -p ROOT-PASSWORD --authenticationDatabase admin
MongoDB shell version v3.6.4
connecting to: mongodb://127.0.0.1:27017/
MongoDB server version: 3.6.4
replica0:PRIMARY> rs.addArb("mongo4:27017")
{
        "ok" : 1,
        "operationTime" : Timestamp(1524575694, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1524575694, 1),
                "signature" : {
                        "hash" : BinData(0,"Bkvnl5fskD4NLvA1qhaU+BYLFNo="),
                        "keyId" : NumberLong("6547996956390588417")
                }
        }
}
replica0:PRIMARY>

The ARBITER role also has a different prompt.

[root@mongo4 ~]# mongo
MongoDB shell version v3.6.4
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.6.4
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
        http://docs.mongodb.org/
Questions? Try the support group
        http://groups.google.com/group/mongodb-user
replica0:ARBITER>

Let's see how the MongoDB rs.config() function shows our configuration after adding the ARBITER node.

[root@mongo0 ~]# mongo --port 27017 -u root -p ROOT-PASSWORD --authenticationDatabase admin
MongoDB shell version v3.6.4
connecting to: mongodb://127.0.0.1:27017/
MongoDB server version: 3.6.4
replica0:PRIMARY> rs.config()
{
        "_id" : "replica0",
        "version" : 2,
        "protocolVersion" : NumberLong(1),
        "members" : [
                {
                        "_id" : 0,
                        "host" : "mongo0:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                },
                {
                        "_id" : 1,
                        "host" : "mongo1:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                },
                {
                        "_id" : 2,
                        "host" : "mongo2:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                },
                {
                        "_id" : 3,
                        "host" : "mongo3:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                },
                {
                        "_id" : 4,
                        "host" : "mongo4:27017",
                        "arbiterOnly" : true,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 0,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                }
        ],
        "settings" : {
                "chainingAllowed" : true,
                "heartbeatIntervalMillis" : 2000,
                "heartbeatTimeoutSecs" : 10,
                "electionTimeoutMillis" : 10000,
                "catchUpTimeoutMillis" : -1,
                "catchUpTakeoverDelayMillis" : 30000,
                "getLastErrorModes" : {

                },
                "getLastErrorDefaults" : {
                        "w" : 1,
                        "wtimeout" : 0
                },
                "replicaSetId" : ObjectId("5adf287d597df99256d11280")
        }
}
replica0:PRIMARY>
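
The same document can be fed back to change the configuration. For example, to prefer mongo0 as the PRIMARY you could raise its priority and pass the modified document to rs.reconfig(). A minimal sketch:

replica0:PRIMARY> cfg = rs.config()
replica0:PRIMARY> cfg.members[0].priority = 2
replica0:PRIMARY> rs.reconfig(cfg)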

Let's see how the MongoDB rs.status() function shows our configuration after adding the ARBITER node.

[root@mongo0 ~]# mongo --port 27017 -u root -p ROOT-PASSWORD --authenticationDatabase admin
MongoDB shell version v3.6.4
connecting to: mongodb://127.0.0.1:27017/
MongoDB server version: 3.6.4
replica0:PRIMARY> rs.status()
{
        "set" : "replica0",
        "date" : ISODate("2018-04-24T13:19:31.989Z"),
        "myState" : 1,
        "term" : NumberLong(2),
        "heartbeatIntervalMillis" : NumberLong(2000),
        "optimes" : {
                "lastCommittedOpTime" : {
                        "ts" : Timestamp(1524575969, 1),
                        "t" : NumberLong(2)
                },
                "readConcernMajorityOpTime" : {
                        "ts" : Timestamp(1524575969, 1),
                        "t" : NumberLong(2)
                },
                "appliedOpTime" : {
                        "ts" : Timestamp(1524575969, 1),
                        "t" : NumberLong(2)
                },
                "durableOpTime" : {
                        "ts" : Timestamp(1524575969, 1),
                        "t" : NumberLong(2)
                }
        },
        "members" : [
                {
                        "_id" : 0,
                        "name" : "mongo0:27017",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 428,
                        "optime" : {
                                "ts" : Timestamp(1524575969, 1),
                                "t" : NumberLong(2)
                        },
                        "optimeDate" : ISODate("2018-04-24T13:19:29Z"),
                        "electionTime" : Timestamp(1524575557, 1),
                        "electionDate" : ISODate("2018-04-24T13:12:37Z"),
                        "configVersion" : 2,
                        "self" : true
                },
                {
                        "_id" : 1,
                        "name" : "mongo1:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 421,
                        "optime" : {
                                "ts" : Timestamp(1524575969, 1),
                                "t" : NumberLong(2)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1524575969, 1),
                                "t" : NumberLong(2)
                        },
                        "optimeDate" : ISODate("2018-04-24T13:19:29Z"),
                        "optimeDurableDate" : ISODate("2018-04-24T13:19:29Z"),
                        "lastHeartbeat" : ISODate("2018-04-24T13:19:31.139Z"),
                        "lastHeartbeatRecv" : ISODate("2018-04-24T13:19:30.455Z"),
                        "pingMs" : NumberLong(1),
                        "syncingTo" : "mongo0:27017",
                        "configVersion" : 2
                },
                {
                        "_id" : 2,
                        "name" : "mongo2:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 418,
                        "optime" : {
                                "ts" : Timestamp(1524575969, 1),
                                "t" : NumberLong(2)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1524575969, 1),
                                "t" : NumberLong(2)
                        },
                        "optimeDate" : ISODate("2018-04-24T13:19:29Z"),
                        "optimeDurableDate" : ISODate("2018-04-24T13:19:29Z"),
                        "lastHeartbeat" : ISODate("2018-04-24T13:19:31.145Z"),
                        "lastHeartbeatRecv" : ISODate("2018-04-24T13:19:30.571Z"),
                        "pingMs" : NumberLong(2),
                        "syncingTo" : "mongo0:27017",
                        "configVersion" : 2
                },
                {
                        "_id" : 3,
                        "name" : "mongo3:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 416,
                        "optime" : {
                                "ts" : Timestamp(1524575969, 1),
                                "t" : NumberLong(2)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1524575969, 1),
                                "t" : NumberLong(2)
                        },
                        "optimeDate" : ISODate("2018-04-24T13:19:29Z"),
                        "optimeDurableDate" : ISODate("2018-04-24T13:19:29Z"),
                        "lastHeartbeat" : ISODate("2018-04-24T13:19:31.145Z"),
                        "lastHeartbeatRecv" : ISODate("2018-04-24T13:19:30.445Z"),
                        "pingMs" : NumberLong(2),
                        "syncingTo" : "mongo0:27017",
                        "configVersion" : 2
                },
                {
                        "_id" : 4,
                        "name" : "mongo4:27017",
                        "health" : 1,
                        "state" : 7,
                        "stateStr" : "ARBITER",
                        "uptime" : 215,
                        "lastHeartbeat" : ISODate("2018-04-24T13:19:31.111Z"),
                        "lastHeartbeatRecv" : ISODate("2018-04-24T13:19:30.730Z"),
                        "pingMs" : NumberLong(2),
                        "configVersion" : 2
                }
        ],
        "ok" : 1,
        "operationTime" : Timestamp(1524575969, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1524575969, 1),
                "signature" : {
                        "hash" : BinData(0,"+nUr+dY6LufEIZIjfzwKRw4cQpM="),
                        "keyId" : NumberLong("6547996956390588417")
                }
        }
}
replica0:PRIMARY>
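
If you only need a quick summary of member states instead of the whole rs.status() document, you can map over the members array in the shell. A minimal sketch:

replica0:PRIMARY> rs.status().members.map(function (m) { return m.name + " " + m.stateStr })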

We can also check which roles are configured by default on our MongoDB cluster.

[root@mongo0 ~]# mongo --port 27017 -u root -p ROOT-PASSWORD --authenticationDatabase admin
MongoDB shell version v3.6.4
connecting to: mongodb://127.0.0.1:27017/
MongoDB server version: 3.6.4
replica0:PRIMARY> show roles
{
        "role" : "dbAdmin",
        "db" : "test",
        "isBuiltin" : true,
        "roles" : [ ],
        "inheritedRoles" : [ ]
}
{
        "role" : "dbOwner",
        "db" : "test",
        "isBuiltin" : true,
        "roles" : [ ],
        "inheritedRoles" : [ ]
}
{
        "role" : "enableSharding",
        "db" : "test",
        "isBuiltin" : true,
        "roles" : [ ],
        "inheritedRoles" : [ ]
}
{
        "role" : "read",
        "db" : "test",
        "isBuiltin" : true,
        "roles" : [ ],
        "inheritedRoles" : [ ]
}
{
        "role" : "readWrite",
        "db" : "test",
        "isBuiltin" : true,
        "roles" : [ ],
        "inheritedRoles" : [ ]
}
{
        "role" : "userAdmin",
        "db" : "test",
        "isBuiltin" : true,
        "roles" : [ ],
        "inheritedRoles" : [ ]
}
replica0:PRIMARY>

… and users.

[root@mongo0 ~]# mongo --port 27017 -u root -p ROOT-PASSWORD --authenticationDatabase admin
MongoDB shell version v3.6.4
connecting to: mongodb://127.0.0.1:27017/
MongoDB server version: 3.6.4
replica0:PRIMARY> use admin
switched to db admin
replica0:PRIMARY> show users
{
        "_id" : "admin.admin",
        "user" : "admin",
        "db" : "admin",
        "roles" : [
                {
                        "role" : "userAdminAnyDatabase",
                        "db" : "admin"
                }
        ]
}
{
        "_id" : "admin.root",
        "user" : "root",
        "db" : "admin",
        "roles" : [
                {
                        "role" : "root",
                        "db" : "admin"
                }
        ]
}
replica0:PRIMARY>
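
If you need an application account, you can create one with the db.createUser() function. A minimal sketch with a hypothetical ‘app‘ user and database:

replica0:PRIMARY> use app
switched to db app
replica0:PRIMARY> db.createUser({
  user: "app",
  pwd: "APP-PASSWORD",
  roles: [ { role: "readWrite", db: "app" } ]
})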

Backup

Below are simple backup commands, included for the completeness of the article.

As the ‘local‘ database is not backed up by default, we have to back it up explicitly with a separate command.

backup % mongodump \
           --host "rs0/mongo0:27017,mongo1:27017,mongo2:27017,mongo3:27017" \
           --username root \
           --password ROOT-PASSWORD \
           --authenticationDatabase admin \
           --db local \
           --out /backup/replica0-local

backup % mongodump \
           --host "rs0/mongo0:27017,mongo1:27017,mongo2:27017,mongo3:27017" \
           --username root \
           --password ROOT-PASSWORD \
           --authenticationDatabase admin \
           --out /backup/replica0-dat
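
To restore such a dump, mongorestore is used in the same manner. A minimal sketch, restoring the data dump taken above:

backup % mongorestore \
           --host "replica0/mongo0:27017,mongo1:27017,mongo2:27017,mongo3:27017" \
           --username root \
           --password ROOT-PASSWORD \
           --authenticationDatabase admin \
           /backup/replica0-dat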

Pretty

To automatically format query responses add this to your ~/.mongorc.js file.

DBQuery.prototype._prettyShell = true
DBQuery.prototype.unpretty = function () {
  this._prettyShell = false;
  return this;
}

I would also set, in the ~/.mongorc.js file, how many results .find() will print before asking you to type ‘it‘ for more.

DBQuery.shellBatchSize = 50

The final ~/.mongorc.js file looks like this.

% cat ~/.mongorc.js
DBQuery.shellBatchSize = 50
DBQuery.prototype._prettyShell = true
DBQuery.prototype.unpretty = function () {
  this._prettyShell = false;
  return this;
}
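
With this in place every query result is pretty printed by default, and you can chain the unpretty() helper defined above on a single query to get the compact form. A usage sketch with a hypothetical collection:

replica0:PRIMARY> db.people.find().unpretty()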

You will find other useful tips in the MongoDB: Tips & Tricks blog post.

Performance

It seems that MongoDB is not always the fastest option, as the PostgreSQL database can also work with the JSON data type.

benchmark-mongodb24-postgresql94

benchmark-persister-timeline

Check these two benchmarks for more information and insight, and decide which database is best for your needs.

Management

For convenient management I would also suggest adding MongoDB Ops Manager, but that is not covered in this (already long) article.

mongodb-ops-manager-deploy

EOF
