ZFS on SMR Drives

The ZFS filesystem (more often called OpenZFS lately – after the project name) is a great filesystem for many purposes – from home or desktop/laptop solutions to enterprise offerings. Traditional disk drives have non-overlapping magnetic tracks parallel to each other. These are PMR disks (Perpendicular Magnetic Recording). To pack even more data onto the same size platters, hard disk drive manufacturers also offer SMR disks (Shingled Magnetic Recording). In SMR disks the data tracks are written to overlap part of the previously written track – this results in narrower tracks and higher density. I will try to visualize this difference below using my favorite Enterprise Architect ASCII Edition software.

 PMR                    SMR

[xxx][___][___][___]   [xx[__[__[___]
[___][xxx][___][___]   [__[xx[__[___]
[___][___][xxx][___]   [__[__[xx[___]
[___][___][___][xxx]   [__[__[__[xxx]
[___][xxx][___][xxx]   [__[xx[__[xxx]
[xxx][___][___][xxx]   [xx[__[__[xxx]

12345678901234567890   12345678901234

I marked the filled blocks on both disks with xxx marks. As you can compare the ‘width’ of the occupied space below, the same data on an SMR disk takes less physical space than on traditional PMR drives. This comes at a price though. Writes are a little ‘crippled’ compared to PMR drives. Especially heavy and random I/O writes are ‘problematic’ and slower on SMR drives … but that does not mean they are useless.

For backup or clone purposes they are more than enough. I personally use SMR drives for my backup solutions. It is just about the price/performance ratio.

Here are my backup solutions based on SMR drives:

Speed

How does ZFS behave on SMR drives? Very well, I would say. ZFS tries to pack as much random I/O into sequential writes as possible with its features – described in detail in the zpool-features(7) man page for example.

I recently tried ZFS on top of a GELI encrypted partition on a 5 TB external USB SMR drive. I needed to copy a little more than 3 TB of data there. I used rsync(1) for that purpose. These are the arguments I use for my rsync(1) jobs.

% rsync --modify-window=1 -l -t -r -D -v -S -H --force    \
        --progress --no-whole-file --numeric-ids --delete \
        /files/ /media/external/files/

Of course I do not type all these options by hand – I just use a script wrapper for that – rsync-delete.sh – available on my scripts page.
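
For reference – a minimal sketch of such a wrapper, hardcoding the flags shown above (the real rsync-delete.sh from the scripts page may look different):

#! /bin/sh
# sketch of an rsync wrapper similar to rsync-delete.sh
# usage: rsync-delete.sh SOURCE/ TARGET/
rsync --modify-window=1 -l -t -r -D -v -S -H --force    \
      --progress --no-whole-file --numeric-ids --delete \
      "${1}" "${2}"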

As I started to copy files onto the drive I watched the write speeds using the iostat(8) and zpool-iostat(8) tools. I expected quite slow operation but even with zstd compression and AES-XTS 256bit GELI encryption enabled I got pretty decent results.

Here are the iostat(8) results. Each line is an average over 10 minutes (600 seconds). Check the speeds for the da0 drive below.

% iostat 600
       tty            ada0             ada1              da0             cpu
 tin  tout KB/t  tps  MB/s  KB/t  tps  MB/s  KB/t  tps  MB/s  us ni sy in id
   1     1  513  120  59.9  29.5   39   1.1   742   65  46.8   4  8 17  2 69
   0     2  615   94  56.6  19.1   22   0.4   751   68  49.8   1  3 14  1 82
   0     0  561  106  57.9  17.9   20   0.4   760   70  52.0   1  2 14  1 82
   0     0 1015   57  56.8  18.4   16   0.3   769   68  50.9   1  3 15  1 81
   0     0 1017   57  56.3  18.5   16   0.3   757   68  50.6   1  3 14  1 81
   0     1  752   72  53.0  16.6   23   0.4   765   67  50.1   1  1 13  0 85
   0     0 1014   51  50.1  16.5   21   0.3   723   68  48.3   1  1 13  0 86
   0     0 1012   51  50.2  19.8   18   0.3   743   68  49.2   1  1 12  0 86

And here are the zpool-iostat(8) results.

% zpool iostat POOL 600
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
POOL        3.18T  1.37T      7     56  53.5K  40.7M
POOL        3.20T  1.34T      0     57  9.01K  41.4M
POOL        3.22T  1.33T      0     47  3.29K  32.3M
POOL        3.24T  1.31T      0     47  5.59K  33.9M
POOL        3.25T  1.29T      0     43  3.39K  24.3M
POOL        3.27T  1.28T      0     42  3.01K  25.5M
POOL        3.28T  1.27T      0     44  3.14K  26.8M
POOL        3.29T  1.26T      0     42  3.49K  23.9M

The drive was attached over a USB 3.0 port so there was no ~35 MB/s limitation from a USB 2.0 port. I would say that the results are very decent and consistent.

Tuning

There are several settings that can help you squeeze the maximum from SMR drives on the ZFS filesystem.

First are the ZFS pool settings. You want the latest zstd compression to save some space. Better compression also means that fewer physical bytes need to be written to the drive – so fewer I/O operations. You should also turn atime off as it will not be needed. You should also increase recordsize to something really big like 1m (1 megabyte) so you will get a higher compressratio and will need less metadata for the same amount of data. Keep in mind that ZFS will still use a variable block size and not only the 1m maximum. If something is smaller (like 100k) then it will only take, for example, 80k (after applied zstd compression). You will not waste 920k here 🙂

Keep in mind that most newer and larger drives use 4k sectors (instead of 512b). Sometimes it is the 512e method, which means that the drive firmware ‘presents’ a device with 512b sectors while underneath eight such 512b sectors lay on a single physical 4k sector. For these reasons it is important to keep several things in mind.
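
You can check what a drive reports with diskinfo(8) – on a 512e drive the logical sectorsize will typically show 512 while the physical stripesize shows 4096:

# diskinfo -v da0 | grep -E 'sectorsize|stripesize'
        512             # sectorsize
        4096            # stripesize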

When adding new partitions with gpart(8) remember to align them to 4k with -a 4k argument.

# gpart add -t freebsd-zfs -a 4k da0

Next – when initializing the geli(8) encryption layer – make sure you add -s 4096 argument.

# geli init -s 4096 /dev/da0p1
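
After the init you also need to attach the provider – geli(8) will then create the /dev/da0p1.eli device on which the ZFS pool ends up (you can see it later in the zpool status output):

# geli attach /dev/da0p1
Enter passphrase: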

The last thing is the ZFS pool creation with the proper ashift property – it cannot be changed later. On FreeBSD UNIX it is done this way:

# sysctl vfs.zfs.min_auto_ashift=12
# zpool create POOL da0
# zdb -C POOL | grep ashift
                ashift: 12

If you are curious what 12 means then the table below will help you:

ASHIFT  BLOCKSIZE
     9  512b
    10  1k
    11  2k
    12  4k
    13  8k
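
The pattern is simply blocksize = 2^ashift, for example:

% echo "2 ^ 12" | bc
4096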

Last but not least is the redundant_metadata option. By default it is at the all setting but it is desirable to set it to the most state. Do you need redundant metadata? I think not. When your single drive fails, the redundant metadata will not help, and if your ZFS pool has some redundancy level like raidz or mirror then redundant metadata is also not needed because it is already ‘normally’ redundant – spread across several disks.

Keep in mind that the ZFS resilver process on some of these SMR drives can take forever. Some people on Reddit reported that they successfully resilvered their ZFS pools with SMR drives but that does not have to be the case for all SMR drives out there. You can also check the Ars Technica tests of resilver on SMR disks.

Here is the summary of the suggested ZFS tunables – you will find an in-depth description of all of them in the zfsprops(7) man page.

# zfs set redundant_metadata=most POOL
# zfs set compression=zstd        POOL
# zfs set atime=off               POOL
# zfs set recordsize=1m           POOL
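
You can verify the settings afterwards with zfs get – expect output along these lines:

# zfs get redundant_metadata,compression,atime,recordsize POOL
NAME  PROPERTY            VALUE  SOURCE
POOL  redundant_metadata  most   local
POOL  compression         zstd   local
POOL  atime               off    local
POOL  recordsize          1M     local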

In theory the TRIM operations upon deletion would create additional unwanted ‘stress’ for SMR drives – which would mean that TRIM operations should be disabled on non-SSD drives – and you can disable them entirely at the ZFS pool level … but.

TRIM commands issued by the operating system allow the SMR HDD internal controller to get the information that certain areas/blocks on the SMR HDD platters are no longer in use. It means that writes to such areas can be performed without the slow read-modify-write pattern.

This means we are leaving the autotrim option set to on (enabled) for SMR drives.

# zpool set autotrim=on POOL

Also – if needed – you can manually trigger the TRIM operations with this command.

# zpool trim POOL
# zpool status POOL
  pool: POOL
 state: ONLINE
  scan: scrub repaired 0B in 02:17:22 with 0 errors on Sun May  8 05:18:22 2022
config:

        NAME          STATE     READ WRITE CKSUM
        POOL          ONLINE       0     0     0
          da0p1.eli   ONLINE       0     0     0  (trimming)

errors: No known data errors


By default up to 64 TRIM commands can be active on FreeBSD. You can limit them to 1 and still have them enabled with the following sysctl(8) tunable.

# sysctl vfs.zfs.vdev.trim_max_active=1

If you want to make it survive across reboots then put it into the /etc/sysctl.conf file.
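
Same pattern as with other tunables – for example:

# echo 'vfs.zfs.vdev.trim_max_active=1' >> /etc/sysctl.conf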

Logic could suggest that simpler/older filesystems such as FreeBSD UFS could be a more suitable solution for SMR drives … but reality shows otherwise. Check this Reddit thread for example – Appalling Performance on External USB SMR Drive – to name just one.

Hope this article will help you get the most out of your SMR drives.

Regards.

EOF

FreeBSD Cluster with Pacemaker and Corosync

I always missed ‘proper’ cluster software for FreeBSD systems. Recently I got to run several Pacemaker/Corosync based clusters on Linux systems. I wondered how to make similar high availability solutions on FreeBSD and I was really surprised when I figured out that both the Pacemaker and Corosync tools are available in the FreeBSD Ports and packages as net/pacemaker2 and net/corosync2 respectively.

In this article I will check how well Pacemaker and Corosync cluster works on FreeBSD.

There are many definitions of a cluster. The one that I like the most is that a cluster is a system that is still redundant after losing one of its nodes (it is still a cluster). This means that 3 nodes is the minimum for a cluster by that definition. Two node clusters are quite problematic because of their large exposure to the split brain problem. That is why in two node clusters additional devices or systems are often added to make sure that split brain does not happen. For example one can add a third node without any resources or services, just in a ‘witness’ role. Another way is to add a shared disk resource that serves the same purpose – often a raw volume with the SCSI-3 Persistent Reservation mechanism.
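
For the record – Corosync itself also ships a knob for the two node case. This is just a sketch of the votequorum setting used in such setups (we will not use it in this article as we go with 3 nodes):

quorum {
  provider: corosync_votequorum
  two_node: 1
}

Keep in mind that two_node: 1 only relaxes the quorum math for two nodes – it does not solve split brain by itself, so proper fencing is still needed there.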

Lab Setup

As usual it will be entirely VirtualBox based and it will consist of 3 hosts. To avoid creating 3 identical FreeBSD installations I used the 12.1-RELEASE virtual machine image available from the FreeBSD Project directly:

There are several formats available – qcow2/raw/vhd/vmdk – but as I will be using VirtualBox I used the VMDK one.

Here is the list of the machines for the Pacemaker cluster:

  • 10.0.10.111 node1
  • 10.0.10.112 node2
  • 10.0.10.113 node3

Each VirtualBox virtual machine for FreeBSD is the default one (as suggested in the VirtualBox wizard) with 512 MB RAM and NAT Network as shown on the image below.

machine

Here is the configuration of the NAT Network on VirtualBox.

nat-network-01

nat-network-02

Before we try to connect to our FreeBSD machines we need to make a minimal network configuration inside each VM. Each FreeBSD machine will have a minimal /etc/rc.conf file as shown in the example for the node1 host.

root@node1:~ # cat /etc/rc.conf
hostname=node1
ifconfig_em0="inet 10.0.10.111/24 up"
defaultrouter=10.0.10.1
sshd_enable=YES

For the setup purposes we will need to allow root login on these FreeBSD machines with the PermitRootLogin yes option in the /etc/ssh/sshd_config file. You will also need to restart the sshd(8) service after the changes.

root@node1:~ # grep PermitRootLogin /etc/ssh/sshd_config
PermitRootLogin yes

root@node1:~ # service sshd restart

By using NAT Network with Port Forwarding the FreeBSD machines will be accessible on localhost ports. For example the node1 machine will be available on port 2211, the node2 machine on port 2212 and so on. This is shown in the sockstat utility output below.

nat-network-03-sockstat

nat-network-04-ssh

To connect to such a machine from the VirtualBox host system you will need this command:

vboxhost % ssh -l root localhost -p 2211
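
To save some typing you can also put these mappings into the ~/.ssh/config file on the VirtualBox host. A sketch – assuming the 2211/2212/2213 forwards from the NAT Network setup above:

Host node1
  HostName localhost
  Port 2211
  User root

Host node2
  HostName localhost
  Port 2212
  User root

Host node3
  HostName localhost
  Port 2213
  User root

With that in place a plain ssh node1 is enough.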

Packages

As we now have ssh(1) connectivity we need to add the needed packages. To make our VMs resolve DNS queries we need to add one last thing. We will also switch to the ‘latest’ branch of the pkg(8) packages.

root@node1:~ # echo 'nameserver 1.1.1.1' > /etc/resolv.conf
root@node1:~ # sed -i '' s/quarterly/latest/g /etc/pkg/FreeBSD.conf

Remember to repeat these two commands above on the node2 and node3 systems.

Now we will add Pacemaker and Corosync packages.

root@node1:~ # pkg install pacemaker2 corosync2 crmsh

root@node2:~ # pkg install pacemaker2 corosync2 crmsh

root@node3:~ # pkg install pacemaker2 corosync2 crmsh

These are the messages from both pacemaker2 and corosync2 that we need to address.

Message from pacemaker2-2.0.4:

--
For correct operation, maximum socket buffer size must be tuned
by performing the following command as root :

# sysctl kern.ipc.maxsockbuf=18874368

To preserve this setting across reboots, append the following
to /etc/sysctl.conf :

kern.ipc.maxsockbuf=18874368

======================================================================

Message from corosync2-2.4.5_1:

--
For correct operation, maximum socket buffer size must be tuned
by performing the following command as root :

# sysctl kern.ipc.maxsockbuf=18874368

To preserve this setting across reboots, append the following
to /etc/sysctl.conf :

kern.ipc.maxsockbuf=18874368

We need to change the kern.ipc.maxsockbuf parameter. Let's do it then.

root@node1:~ # echo 'kern.ipc.maxsockbuf=18874368' >> /etc/sysctl.conf
root@node1:~ # service sysctl restart

root@node2:~ # echo 'kern.ipc.maxsockbuf=18874368' >> /etc/sysctl.conf
root@node2:~ # service sysctl restart

root@node3:~ # echo 'kern.ipc.maxsockbuf=18874368' >> /etc/sysctl.conf
root@node3:~ # service sysctl restart

Let's check what binaries come with these packages.

root@node1:~ # pkg info -l pacemaker2 | grep bin
        /usr/local/sbin/attrd_updater
        /usr/local/sbin/cibadmin
        /usr/local/sbin/crm_attribute
        /usr/local/sbin/crm_diff
        /usr/local/sbin/crm_error
        /usr/local/sbin/crm_failcount
        /usr/local/sbin/crm_master
        /usr/local/sbin/crm_mon
        /usr/local/sbin/crm_node
        /usr/local/sbin/crm_report
        /usr/local/sbin/crm_resource
        /usr/local/sbin/crm_rule
        /usr/local/sbin/crm_shadow
        /usr/local/sbin/crm_simulate
        /usr/local/sbin/crm_standby
        /usr/local/sbin/crm_ticket
        /usr/local/sbin/crm_verify
        /usr/local/sbin/crmadmin
        /usr/local/sbin/fence_legacy
        /usr/local/sbin/iso8601
        /usr/local/sbin/pacemaker-remoted
        /usr/local/sbin/pacemaker_remoted
        /usr/local/sbin/pacemakerd
        /usr/local/sbin/stonith_admin

root@node1:~ # pkg info -l corosync2 | grep bin
        /usr/local/bin/corosync-blackbox
        /usr/local/sbin/corosync
        /usr/local/sbin/corosync-cfgtool
        /usr/local/sbin/corosync-cmapctl
        /usr/local/sbin/corosync-cpgtool
        /usr/local/sbin/corosync-keygen
        /usr/local/sbin/corosync-notifyd
        /usr/local/sbin/corosync-quorumtool

root@node1:~ # pkg info -l crmsh | grep bin
        /usr/local/bin/crm

Cluster Initialization

Now we will initialize our FreeBSD cluster.

First we need to make sure that the names of the nodes are resolvable (here via the /etc/hosts file).

root@node1:~ # tail -3 /etc/hosts

10.0.10.111 node1
10.0.10.112 node2
10.0.10.113 node3

root@node2:~ # tail -3 /etc/hosts

10.0.10.111 node1
10.0.10.112 node2
10.0.10.113 node3

root@node3:~ # tail -3 /etc/hosts

10.0.10.111 node1
10.0.10.112 node2
10.0.10.113 node3


Now we will generate the Corosync key.

root@node1:~ # corosync-keygen
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/random.
Press keys on your keyboard to generate entropy.
Writing corosync key to /usr/local/etc/corosync/authkey.

root@node1:~ # echo $?
0

root@node1:~ # ls -l /usr/local/etc/corosync/authkey
-r--------  1 root  wheel  128 Sep  2 20:37 /usr/local/etc/corosync/authkey

Now the Corosync configuration file. Fortunately some examples are provided by the package maintainer.

root@node1:~ # pkg info -l corosync2 | grep example
        /usr/local/etc/corosync/corosync.conf.example
        /usr/local/etc/corosync/corosync.conf.example.udpu

We will take the second one as a base for our config.

root@node1:~ # cp /usr/local/etc/corosync/corosync.conf.example.udpu /usr/local/etc/corosync/corosync.conf

root@node1:~ # vi /usr/local/etc/corosync/corosync.conf
               /* LOTS OF EDITS HERE */

root@node1:~ # cat /usr/local/etc/corosync/corosync.conf

totem {
  version: 2
  crypto_cipher: aes256
  crypto_hash: sha256
  transport: udpu

  interface {
    ringnumber: 0
    bindnetaddr: 10.0.10.0
    mcastport: 5405
    ttl: 1
  }
}

logging {
  fileline: off
  to_logfile: yes
  to_syslog: no
  logfile: /var/log/cluster/corosync.log
  debug: off
  timestamp: on

  logger_subsys {
    subsys: QUORUM
    debug: off
  }
}

nodelist {

  node {
    ring0_addr: 10.0.10.111
    nodeid: 1
  }

  node {
    ring0_addr: 10.0.10.112
    nodeid: 2
  }

  node {
    ring0_addr: 10.0.10.113
    nodeid: 3
  }

}

quorum {
  provider: corosync_votequorum
  expected_votes: 2
}

Now we need to propagate both the Corosync key and the config across the nodes of the cluster.

We can use some simple tools created exactly for that – like the net/csync2 cluster synchronization tool for example – but plain old net/rsync will serve as well.

root@node1:~ # pkg install -y rsync

root@node1:~ # rsync -av /usr/local/etc/corosync/ node2:/usr/local/etc/corosync/
The authenticity of host 'node2 (10.0.10.112)' can't be established.
ECDSA key fingerprint is SHA256:/ZDmln7GKi6n0kbad73TIrajPjGfQqJJX+ReSf3NMvc.
No matching host key fingerprint found in DNS.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node2' (ECDSA) to the list of known hosts.
Password for root@node2:
sending incremental file list
./
authkey
corosync.conf
service.d/
uidgid.d/

sent 1,100 bytes  received 69 bytes  259.78 bytes/sec
total size is 4,398  speedup is 3.76

root@node1:~ # rsync -av /usr/local/etc/corosync/ node3:/usr/local/etc/corosync/
The authenticity of host 'node2 (10.0.10.112)' can't be established.
ECDSA key fingerprint is SHA256:/ZDmln7GKi6n0kbad73TIrajPjGfQqJJX+ReSf3NMvc.
No matching host key fingerprint found in DNS.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node3' (ECDSA) to the list of known hosts.
Password for root@node3:
sending incremental file list
./
authkey
corosync.conf
service.d/
uidgid.d/

sent 1,100 bytes  received 69 bytes  259.78 bytes/sec
total size is 4,398  speedup is 3.76

Now let's check that they are the same.

root@node1:~ # cksum /usr/local/etc/corosync/{authkey,corosync.conf}
2277171666 128 /usr/local/etc/corosync/authkey
1728717329 622 /usr/local/etc/corosync/corosync.conf

root@node2:~ # cksum /usr/local/etc/corosync/{authkey,corosync.conf}
2277171666 128 /usr/local/etc/corosync/authkey
1728717329 622 /usr/local/etc/corosync/corosync.conf

root@node3:~ # cksum /usr/local/etc/corosync/{authkey,corosync.conf}
2277171666 128 /usr/local/etc/corosync/authkey
1728717329 622 /usr/local/etc/corosync/corosync.conf

Same.

We can now add corosync_enable=YES and pacemaker_enable=YES to the /etc/rc.conf file.

root@node1:~ # sysrc corosync_enable=YES
corosync_enable:  -> YES

root@node1:~ # sysrc pacemaker_enable=YES
pacemaker_enable:  -> YES

root@node2:~ # sysrc corosync_enable=YES
corosync_enable:  -> YES

root@node2:~ # sysrc pacemaker_enable=YES
pacemaker_enable:  -> YES

root@node3:~ # sysrc corosync_enable=YES
corosync_enable:  -> YES

root@node3:~ # sysrc pacemaker_enable=YES
pacemaker_enable:  -> YES

Let's start these services then.

root@node1:~ # service corosync start
Starting corosync.
Sep 02 20:55:35 notice  [MAIN  ] Corosync Cluster Engine ('2.4.5'): started and ready to provide service.
Sep 02 20:55:35 info    [MAIN  ] Corosync built-in features:
Sep 02 20:55:35 warning [MAIN  ] interface section bindnetaddr is used together with nodelist. Nodelist one is going to be used.
Sep 02 20:55:35 warning [MAIN  ] Please migrate config file to nodelist.

root@node1:~ # ps aux | grep corosync
root  1695   0.0  7.9 38340 38516  -  S    20:55    0:00.40 /usr/local/sbin/corosync
root  1699   0.0  0.1   524   336  0  R+   20:57    0:00.00 grep corosync

Do the same on the node2 and node3 systems.
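
You can also verify the ring status with corosync-cfgtool(8) – the output should look more or less like this:

root@node1:~ # corosync-cfgtool -s
Printing ring status.
Local node ID 1
RING ID 0
        id      = 10.0.10.111
        status  = ring 0 active with no faults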

Pacemaker is not yet running so the crm status command will fail.

root@node1:~ # crm status
Could not connect to the CIB: Socket is not connected
crm_mon: Error: cluster is not available on this node
ERROR: status: crm_mon (rc=102): 

We will start it now.

root@node1:~ # service pacemaker start
Starting pacemaker.

root@node2:~ # service pacemaker start
Starting pacemaker.

root@node3:~ # service pacemaker start
Starting pacemaker.

You need to give it a little time to start because if you execute the crm status command right away you will get a 0 nodes configured message as shown below.

root@node1:~ # crm status
Cluster Summary:
  * Stack: unknown
  * Current DC: NONE
  * Last updated: Wed Sep  2 20:58:51 2020
  * Last change:  
  * 0 nodes configured
  * 0 resource instances configured


Full List of Resources:
  * No resources

… but after a while everything is detected and works as desired.

root@node1:~ # crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node2 (version 2.0.4-2deceaa3ae) - partition with quorum
  * Last updated: Wed Sep  2 21:02:49 2020
  * Last change:  Wed Sep  2 20:59:00 2020 by hacluster via crmd on node2
  * 3 nodes configured
  * 0 resource instances configured

Node List:
  * Online: [ node1 node2 node3 ]

Full List of Resources:
  * No resources

Pacemaker runs properly.

root@node1:~ # ps aux | grep pacemaker
root      1716   0.0  0.5 10844   2396  -  Is   20:58     0:00.00 daemon: /usr/local/sbin/pacemakerd[1717] (daemon)
root      1717   0.0  5.2 49264  25284  -  S    20:58     0:00.27 /usr/local/sbin/pacemakerd
hacluster 1718   0.0  6.1 48736  29708  -  Ss   20:58     0:00.75 /usr/local/libexec/pacemaker/pacemaker-based
root      1719   0.0  4.5 40628  21984  -  Ss   20:58     0:00.28 /usr/local/libexec/pacemaker/pacemaker-fenced
root      1720   0.0  2.8 25204  13688  -  Ss   20:58     0:00.20 /usr/local/libexec/pacemaker/pacemaker-execd
hacluster 1721   0.0  3.9 38148  19100  -  Ss   20:58     0:00.25 /usr/local/libexec/pacemaker/pacemaker-attrd
hacluster 1722   0.0  2.9 25460  13864  -  Ss   20:58     0:00.17 /usr/local/libexec/pacemaker/pacemaker-schedulerd
hacluster 1723   0.0  5.4 49304  26300  -  Ss   20:58     0:00.41 /usr/local/libexec/pacemaker/pacemaker-controld
root      1889   0.0  0.6 11348   2728  0  S+   21:56     0:00.00 grep pacemaker

We can check how Corosync sees its members.

root@node1:~ # corosync-cmapctl | grep members
runtime.totem.pg.mrp.srp.members.1.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.1.ip (str) = r(0) ip(10.0.10.111) 
runtime.totem.pg.mrp.srp.members.1.join_count (u32) = 1
runtime.totem.pg.mrp.srp.members.1.status (str) = joined
runtime.totem.pg.mrp.srp.members.2.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.2.ip (str) = r(0) ip(10.0.10.112) 
runtime.totem.pg.mrp.srp.members.2.join_count (u32) = 1
runtime.totem.pg.mrp.srp.members.2.status (str) = joined
runtime.totem.pg.mrp.srp.members.3.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.3.ip (str) = r(0) ip(10.0.10.113) 
runtime.totem.pg.mrp.srp.members.3.join_count (u32) = 1
runtime.totem.pg.mrp.srp.members.3.status (str) = joined

… or the quorum information.

root@node1:~ # corosync-quorumtool
Quorum information
------------------
Date:             Wed Sep  2 21:00:38 2020
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          1
Ring ID:          1/12
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2  
Flags:            Quorate 

Membership information
----------------------
    Nodeid      Votes Name
         1          1 10.0.10.111 (local)
         2          1 10.0.10.112
         3          1 10.0.10.113

The Corosync log file is filled with the following information.

root@node1:~ # cat /var/log/cluster/corosync.log
Sep 02 20:55:35 [1694] node1 corosync notice  [MAIN  ] Corosync Cluster Engine ('2.4.5'): started and ready to provide service.
Sep 02 20:55:35 [1694] node1 corosync info    [MAIN  ] Corosync built-in features:
Sep 02 20:55:35 [1694] node1 corosync warning [MAIN  ] interface section bindnetaddr is used together with nodelist. Nodelist one is going to be used.
Sep 02 20:55:35 [1694] node1 corosync warning [MAIN  ] Please migrate config file to nodelist.
Sep 02 20:55:35 [1694] node1 corosync notice  [TOTEM ] Initializing transport (UDP/IP Unicast).
Sep 02 20:55:35 [1694] node1 corosync notice  [TOTEM ] Initializing transmit/receive security (NSS) crypto: aes256 hash: sha256
Sep 02 20:55:35 [1694] node1 corosync notice  [TOTEM ] The network interface [10.0.10.111] is now up.
Sep 02 20:55:35 [1694] node1 corosync notice  [SERV  ] Service engine loaded: corosync configuration map access [0]
Sep 02 20:55:35 [1694] node1 corosync info    [QB    ] server name: cmap
Sep 02 20:55:35 [1694] node1 corosync notice  [SERV  ] Service engine loaded: corosync configuration service [1]
Sep 02 20:55:35 [1694] node1 corosync info    [QB    ] server name: cfg
Sep 02 20:55:35 [1694] node1 corosync notice  [SERV  ] Service engine loaded: corosync cluster closed process group service v1.01 [2]
Sep 02 20:55:35 [1694] node1 corosync info    [QB    ] server name: cpg
Sep 02 20:55:35 [1694] node1 corosync notice  [SERV  ] Service engine loaded: corosync profile loading service [4]
Sep 02 20:55:35 [1694] node1 corosync notice  [QUORUM] Using quorum provider corosync_votequorum
Sep 02 20:55:35 [1694] node1 corosync notice  [SERV  ] Service engine loaded: corosync vote quorum service v1.0 [5]
Sep 02 20:55:35 [1694] node1 corosync info    [QB    ] server name: votequorum
Sep 02 20:55:35 [1694] node1 corosync notice  [SERV  ] Service engine loaded: corosync cluster quorum service v0.1 [3]
Sep 02 20:55:35 [1694] node1 corosync info    [QB    ] server name: quorum
Sep 02 20:55:35 [1694] node1 corosync notice  [TOTEM ] adding new UDPU member {10.0.10.111}
Sep 02 20:55:35 [1694] node1 corosync notice  [TOTEM ] adding new UDPU member {10.0.10.112}
Sep 02 20:55:35 [1694] node1 corosync notice  [TOTEM ] adding new UDPU member {10.0.10.113}
Sep 02 20:55:35 [1694] node1 corosync notice  [TOTEM ] A new membership (10.0.10.111:4) was formed. Members joined: 1
Sep 02 20:55:35 [1694] node1 corosync warning [CPG   ] downlist left_list: 0 received
Sep 02 20:55:35 [1694] node1 corosync notice  [QUORUM] Members[1]: 1
Sep 02 20:55:35 [1694] node1 corosync notice  [MAIN  ] Completed service synchronization, ready to provide service.
Sep 02 20:58:14 [1694] node1 corosync notice  [TOTEM ] A new membership (10.0.10.111:8) was formed. Members joined: 2
Sep 02 20:58:14 [1694] node1 corosync warning [CPG   ] downlist left_list: 0 received
Sep 02 20:58:14 [1694] node1 corosync warning [CPG   ] downlist left_list: 0 received
Sep 02 20:58:14 [1694] node1 corosync notice  [QUORUM] This node is within the primary component and will provide service.
Sep 02 20:58:14 [1694] node1 corosync notice  [QUORUM] Members[2]: 1 2
Sep 02 20:58:14 [1694] node1 corosync notice  [MAIN  ] Completed service synchronization, ready to provide service.
Sep 02 20:58:19 [1694] node1 corosync notice  [TOTEM ] A new membership (10.0.10.111:12) was formed. Members joined: 3
Sep 02 20:58:19 [1694] node1 corosync warning [CPG   ] downlist left_list: 0 received
Sep 02 20:58:19 [1694] node1 corosync warning [CPG   ] downlist left_list: 0 received
Sep 02 20:58:19 [1694] node1 corosync warning [CPG   ] downlist left_list: 0 received
Sep 02 20:58:19 [1694] node1 corosync notice  [QUORUM] Members[3]: 1 2 3
Sep 02 20:58:19 [1694] node1 corosync notice  [MAIN  ] Completed service synchronization, ready to provide service.

Here is the configuration.

root@node1:~ # crm configure show
node 1: node1
node 2: node2
node 3: node3
property cib-bootstrap-options: \
        have-watchdog=false \
        dc-version=2.0.4-2deceaa3ae \
        cluster-infrastructure=corosync

As we will not be configuring the STONITH mechanism we will disable it.

root@node1:~ # crm configure property stonith-enabled=false

New configuration with STONITH disabled.

root@node1:~ # crm configure show
node 1: node1
node 2: node2
node 3: node3
property cib-bootstrap-options: \
        have-watchdog=false \
        dc-version=2.0.4-2deceaa3ae \
        cluster-infrastructure=corosync \
        stonith-enabled=false

The STONITH configuration is out of the scope of this article but properly configured STONITH looks like this.

stonith

First Service

We will now configure our first highly available service – a classic – a floating IP address 🙂

root@node1:~ # crm configure primitive IP ocf:heartbeat:IPaddr2 params ip=10.0.10.200 cidr_netmask="24" op monitor interval="30s"

Let's check how it behaves.

root@node1:~ # crm configure show
node 1: node1
node 2: node2
node 3: node3
primitive IP IPaddr2 \
        params ip=10.0.10.200 cidr_netmask=24 \
        op monitor interval=30s
property cib-bootstrap-options: \
        have-watchdog=false \
        dc-version=2.0.4-2deceaa3ae \
        cluster-infrastructure=corosync \
        stonith-enabled=false

Looks good – let's check the cluster status.

root@node1:~ # crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node2 (version 2.0.4-2deceaa3ae) - partition with quorum
  * Last updated: Wed Sep  2 22:03:35 2020
  * Last change:  Wed Sep  2 22:02:53 2020 by root via cibadmin on node1
  * 3 nodes configured
  * 1 resource instance configured

Node List:
  * Online: [ node1 node2 node3 ]

Full List of Resources:
  * IP  (ocf::heartbeat:IPaddr2):        Stopped

Failed Resource Actions:
  * IP_monitor_0 on node3 'not installed' (5): call=5, status='complete', exitreason='Setup problem: couldn't find command: ip', last-rc-change='2020-09-02 22:02:53Z', queued=0ms, exec=132ms
  * IP_monitor_0 on node2 'not installed' (5): call=5, status='complete', exitreason='Setup problem: couldn't find command: ip', last-rc-change='2020-09-02 22:02:54Z', queued=0ms, exec=120ms
  * IP_monitor_0 on node1 'not installed' (5): call=5, status='complete', exitreason='Setup problem: couldn't find command: ip', last-rc-change='2020-09-02 22:02:53Z', queued=0ms, exec=110ms

Crap. A Linuxism. The ip(8) command is expected to be present in the system. This is FreeBSD and like any UNIX system it comes with the ifconfig(8) command instead.
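
For illustration only – what the agent tries to do with ip(8) on Linux boils down to something like this with ifconfig(8) on FreeBSD (not a resource agent yet, just the add/remove of an address alias):

# ifconfig em0 alias 10.0.10.200/24
# ifconfig em0 -alias 10.0.10.200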

We will have to figure out something else. For now we will delete our useless IP service.

root@node1:~ # crm configure delete IP

Status after deletion.

root@node1:~ # crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node2 (version 2.0.4-2deceaa3ae) - partition with quorum
  * Last updated: Wed Sep  2 22:04:34 2020
  * Last change:  Wed Sep  2 22:04:31 2020 by root via cibadmin on node1
  * 3 nodes configured
  * 0 resource instances configured

Node List:
  * Online: [ node1 node2 node3 ]

Full List of Resources:
  * No resources

Custom Resource

Let's check what resources are available in the stock Pacemaker installation.

root@node1:~ # ls -l /usr/local/lib/ocf/resource.d/pacemaker
total 144
-r-xr-xr-x  1 root  wheel   7484 Aug 29 01:22 ClusterMon
-r-xr-xr-x  1 root  wheel   9432 Aug 29 01:22 Dummy
-r-xr-xr-x  1 root  wheel   5256 Aug 29 01:22 HealthCPU
-r-xr-xr-x  1 root  wheel   5342 Aug 29 01:22 HealthIOWait
-r-xr-xr-x  1 root  wheel   9450 Aug 29 01:22 HealthSMART
-r-xr-xr-x  1 root  wheel   6186 Aug 29 01:22 Stateful
-r-xr-xr-x  1 root  wheel  11370 Aug 29 01:22 SysInfo
-r-xr-xr-x  1 root  wheel   5856 Aug 29 01:22 SystemHealth
-r-xr-xr-x  1 root  wheel   7382 Aug 29 01:22 attribute
-r-xr-xr-x  1 root  wheel   7854 Aug 29 01:22 controld
-r-xr-xr-x  1 root  wheel  16134 Aug 29 01:22 ifspeed
-r-xr-xr-x  1 root  wheel  11040 Aug 29 01:22 o2cb
-r-xr-xr-x  1 root  wheel  11696 Aug 29 01:22 ping
-r-xr-xr-x  1 root  wheel   6356 Aug 29 01:22 pingd
-r-xr-xr-x  1 root  wheel   3702 Aug 29 01:22 remote

Not many … we will try to modify the Dummy service into an IP changer for FreeBSD.

root@node1:~ # cp /usr/local/lib/ocf/resource.d/pacemaker/Dummy /usr/local/lib/ocf/resource.d/pacemaker/ifconfig

root@node1:~ # vi /usr/local/lib/ocf/resource.d/pacemaker/ifconfig
               /* LOTS OF TYPING */

Because of the WordPress blogging system limitations I am forced to post this ifconfig resource as an image … but fear not – the text version is also available here – ifconfig.odt – for download.

Also the first version did not go that well …

root@node1:~ # setenv OCF_ROOT /usr/local/lib/ocf
root@node1:~ # ocf-tester -n resourcename /usr/local/lib/ocf/resource.d/pacemaker/ifconfig
Beginning tests for /usr/local/lib/ocf/resource.d/pacemaker/ifconfig...
* rc=3: Your agent has too restrictive permissions: should be 755
-:1: parser error : Start tag expected, '<' not found
usage: /usr/local/lib/ocf/resource.d/pacemaker/ifconfig {start|stop|monitor}
^
* rc=1: Your agent produces meta-data which does not conform to ra-api-1.dtd
* rc=3: Your agent does not support the meta-data action
* rc=3: Your agent does not support the validate-all action
* rc=0: Monitoring a stopped resource should return 7
* rc=0: The initial probe for a stopped resource should return 7 or 5 even if all binaries are missing
* Your agent does not support the notify action (optional)
* Your agent does not support the demote action (optional)
* Your agent does not support the promote action (optional)
* Your agent does not support master/slave (optional)
* rc=0: Monitoring a stopped resource should return 7
* rc=0: Monitoring a stopped resource should return 7
* rc=0: Monitoring a stopped resource should return 7
* Your agent does not support the reload action (optional)
Tests failed: /usr/local/lib/ocf/resource.d/pacemaker/ifconfig failed 9 tests

But after adding the 755 mode to it and making several (hundred) changes it became usable.

root@node1:~ # vi /usr/local/lib/ocf/resource.d/pacemaker/ifconfig
             /* LOTS OF NERVOUS TYPING */
root@node1:~ # chmod 755 /usr/local/lib/ocf/resource.d/pacemaker/ifconfig
root@node1:~ # setenv OCF_ROOT /usr/local/lib/ocf
root@node1:~ # ocf-tester -n resourcename /usr/local/lib/ocf/resource.d/pacemaker/ifconfig
Beginning tests for /usr/local/lib/ocf/resource.d/pacemaker/ifconfig...
* Your agent does not support the notify action (optional)
* Your agent does not support the demote action (optional)
* Your agent does not support the promote action (optional)
* Your agent does not support master/slave (optional)
* Your agent does not support the reload action (optional)
/usr/local/lib/ocf/resource.d/pacemaker/ifconfig passed all tests

Looks usable.

Here is the ifconfig resource. It is pretty limited and has a hardcoded IP address for now.

ifconfig
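
As the image/odt may be inconvenient, here is a rough sketch of the core logic of such an agent – with a hardcoded address and interface as noted above, and with the meta-data action omitted for brevity (ocf-tester complains about exactly that, as we saw earlier):

#! /bin/sh
# sketch of a minimal OCF-style agent using ifconfig(8)
# hardcoded values - just like the ifconfig resource above
IP='10.0.10.200'
IF='em0'

case ${1} in
  (start)
    ifconfig ${IF} alias ${IP}/24
    exit 0 # OCF_SUCCESS
    ;;
  (stop)
    ifconfig ${IF} -alias ${IP} 2> /dev/null
    exit 0 # OCF_SUCCESS
    ;;
  (monitor)
    if ifconfig ${IF} | grep -q "inet ${IP} "
    then
      exit 0 # OCF_SUCCESS - address is configured here
    else
      exit 7 # OCF_NOT_RUNNING
    fi
    ;;
  (*)
    echo "usage: ${0} {start|stop|monitor}"
    exit 3 # OCF_ERR_UNIMPLEMENTED
    ;;
esac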

Let's try to add the new IP resource to our FreeBSD cluster.

Tests

root@node1:~ # crm configure primitive IP ocf:pacemaker:ifconfig op monitor interval="30"

Added.

Let's see what the status command shows now.

root@node1:~ # crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node2 (version 2.0.4-2deceaa3ae) - partition with quorum
  * Last updated: Wed Sep  2 22:44:52 2020
  * Last change:  Wed Sep  2 22:44:44 2020 by root via cibadmin on node1
  * 3 nodes configured
  * 1 resource instance configured

Node List:
  * Online: [ node1 node2 node3 ]

Full List of Resources:
  * IP  (ocf::pacemaker:ifconfig):       Started node1

Failed Resource Actions:
  * IP_monitor_0 on node3 'not installed' (5): call=24, status='Not installed', exitreason='', last-rc-change='2020-09-02 22:42:52Z', queued=0ms, exec=5ms
  * IP_monitor_0 on node2 'not installed' (5): call=24, status='Not installed', exitreason='', last-rc-change='2020-09-02 22:42:53Z', queued=0ms, exec=2ms

Crap. I forgot to copy the new ifconfig resource to the other nodes. Let's fix that now.

root@node1:~ # rsync -av /usr/local/lib/ocf/resource.d/pacemaker/ node2:/usr/local/lib/ocf/resource.d/pacemaker/
Password for root@node2:
sending incremental file list
./
ifconfig

sent 3,798 bytes  received 38 bytes  1,534.40 bytes/sec
total size is 128,003  speedup is 33.37

root@node1:~ # rsync -av /usr/local/lib/ocf/resource.d/pacemaker/ node3:/usr/local/lib/ocf/resource.d/pacemaker/
Password for root@node3:
sending incremental file list
./
ifconfig

sent 3,798 bytes  received 38 bytes  1,534.40 bytes/sec
total size is 128,003  speedup is 33.37

Let's stop, delete and re-add our precious resource now.

root@node1:~ # crm resource stop IP
root@node1:~ # crm configure delete IP
root@node1:~ # crm configure primitive IP ocf:pacemaker:ifconfig op monitor interval="30"

Fingers crossed.

root@node1:~ # crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node2 (version 2.0.4-2deceaa3ae) - partition with quorum
  * Last updated: Wed Sep  2 22:45:46 2020
  * Last change:  Wed Sep  2 22:45:43 2020 by root via cibadmin on node1
  * 3 nodes configured
  * 1 resource instance configured

Node List:
  * Online: [ node1 node2 node3 ]

Full List of Resources:
  * IP  (ocf::pacemaker:ifconfig):       Started node1

Looks like it is running properly.

Let's verify that it is really up where it should be.

root@node1:~ # ifconfig em0
em0: flags=8843 metric 0 mtu 1500
        options=81009b
        ether 08:00:27:2a:78:60
        inet 10.0.10.111 netmask 0xffffff00 broadcast 10.0.10.255
        inet 10.0.10.200 netmask 0xffffff00 broadcast 10.0.10.255
        media: Ethernet autoselect (1000baseT )
        status: active
        nd6 options=29

root@node2:~ # ifconfig em0
em0: flags=8843 metric 0 mtu 1500
        options=81009b
        ether 08:00:27:80:50:05
        inet 10.0.10.112 netmask 0xffffff00 broadcast 10.0.10.255
        media: Ethernet autoselect (1000baseT )
        status: active
        nd6 options=29

root@node3:~ # ifconfig em0
em0: flags=8843 metric 0 mtu 1500
        options=81009b
        ether 08:00:27:74:5e:b9
        inet 10.0.10.113 netmask 0xffffff00 broadcast 10.0.10.255
        media: Ethernet autoselect (1000baseT )
        status: active
        nd6 options=29

Seems to be working.

Now let's try to move it to another node in the cluster.

root@node1:~ # crm resource move IP node3
INFO: Move constraint created for IP to node3

root@node1:~ # crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node2 (version 2.0.4-2deceaa3ae) - partition with quorum
  * Last updated: Wed Sep  2 22:47:31 2020
  * Last change:  Wed Sep  2 22:47:28 2020 by root via crm_resource on node1
  * 3 nodes configured
  * 1 resource instance configured

Node List:
  * Online: [ node1 node2 node3 ]

Full List of Resources:
  * IP  (ocf::pacemaker:ifconfig):       Started node3

It switched properly to the node3 system.

root@node3:~ # ifconfig em0
em0: flags=8843 metric 0 mtu 1500
        options=81009b
        ether 08:00:27:74:5e:b9
        inet 10.0.10.113 netmask 0xffffff00 broadcast 10.0.10.255
        inet 10.0.10.200 netmask 0xffffff00 broadcast 10.0.10.255
        media: Ethernet autoselect (1000baseT )
        status: active
        nd6 options=29

root@node1:~ # ifconfig em0
em0: flags=8843 metric 0 mtu 1500
        options=81009b
        ether 08:00:27:2a:78:60
        inet 10.0.10.111 netmask 0xffffff00 broadcast 10.0.10.255
        media: Ethernet autoselect (1000baseT )
        status: active
        nd6 options=29
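
One thing to remember – as the INFO message above stated – the move created a location constraint pinning IP to node3. Depending on the crmsh version that constraint can be removed later with something along these lines (the subcommand may also be called unmigrate or clear):

root@node1:~ # crm resource unmove IP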

Now we will power off the node3 system to check that the IP is really highly available.

root@node2:~ # crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node2 (version 2.0.4-2deceaa3ae) - partition with quorum
  * Last updated: Wed Sep  2 22:49:57 2020
  * Last change:  Wed Sep  2 22:47:29 2020 by root via crm_resource on node1
  * 3 nodes configured
  * 1 resource instance configured

Node List:
  * Online: [ node1 node2 node3 ]

Full List of Resources:
  * IP  (ocf::pacemaker:ifconfig):       Started node3

root@node3:~ # poweroff

root@node2:~ # crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node2 (version 2.0.4-2deceaa3ae) - partition with quorum
  * Last updated: Wed Sep  2 22:50:16 2020
  * Last change:  Wed Sep  2 22:47:29 2020 by root via crm_resource on node1
  * 3 nodes configured
  * 1 resource instance configured

Node List:
  * Online: [ node1 node2 ]
  * OFFLINE: [ node3 ]

Full List of Resources:
  * IP  (ocf::pacemaker:ifconfig):       Started node1

Seems that the failover went well.

The crm command also colors various sections of its output.

failover

Good to know that a Pacemaker and Corosync cluster runs well on FreeBSD.

Some work is needed to write the needed resource files, but with some time and determination one can surely turn FreeBSD into a very capable highly available cluster.

EOF

Bareos Backup Server on FreeBSD

Ever heard about Bareos? You have probably heard about Bacula. Read about the difference here – Why Bareos forked from Bacula?

If you are interested in a more enterprise backup solution then check the IBM TSM (Spectrum Protect) on Veritas Cluster Server article.

Bareos (Backup Archiving Recovery Open Sourced) is a network based open source backup solution. It is a 100% open source fork of the backup project from the bacula.org site. The fork has been in development since late 2010 and it has a lot of new features. The source is published on github and licensed under the AGPLv3 license. Bareos supports the ‘Always Incremental’ backup scheme which is interesting especially for users with big data. The time and network capacity consuming full backups only have to be taken once. Bareos comes with a WebUI for administration tasks and a restore file browser. Bareos can back up data to disk and to tape drives as well as tape libraries. It supports compression and encryption, both hardware-based (like on LTO tape drives) and software-based. You can also get professional services and support from Bareos, as well as a Bareos subscription service that provides you access to special quality assured installation packages.

I started my sysadmin job with a backup system as one of my new responsibilities, so this will be like going back to the roots. As I look at the ‘backup’ market it is more and more popular – especially in cloud oriented environments – to implement various levels of protection like GOLD, SILVER and BRONZE for example. They of course have different retention times, numbers of backups kept, and different RTO and RPO. Below is an example implementation of BRONZE level backups in Bareos. I used 3 groups of A, B and C with a FULL backup starting on DAY 0 (A group), DAY 1 (B group) and DAY 2 (C group).

bareos-sched-levels-256.png

This way you still have FULL backups quite often and with 3 groups you can balance the network load. For the days on which we will not be doing FULL backups we will be doing DIFFERENTIAL backups. People often confuse them with INCREMENTAL backups. The difference is that DIFFERENTIAL backups are always made against the FULL backup, so there is always just ‘one level of combining’. INCREMENTAL ones are made against the last backup of any TYPE, so it is possible to have 100+ levels of combining against 99 earlier INCREMENTAL backups and the 1 FULL backup. That is why I prefer DIFFERENTIAL ones here – faster recovery. That is what backups are generally all about – recovery – some people/companies tend to forget that.
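
A simple illustration of the difference in restore chains – with a FULL backup on day 0 and a restore needed on day 5:

DIFFERENTIAL:  FULL (day 0) + DIFF (day 5)                                   = 2 sets to restore
INCREMENTAL:   FULL (day 0) + INC (day 1) + INC (day 2) + ... + INC (day 5)  = 6 sets to restore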

The implementation of BRONZE in these three groups is not perfect, but it ‘does the job’. I also made a ‘simulation’ of how these groups will overlap at the end/beginning of the month – here is the result.

bareos-sched-cross-256.png

Not bad for my taste.

Today I will show you how to install and configure a Bareos backup server based on the FreeBSD operating system. It will be the most simplified setup with all services on a single machine:

  • bareos-dir
  • bareos-sd
  • bareos-webui
  • bareos-fd

I also assume that in order to provide storage space for the backup data itself you would mount resources from external NFS shares.

To get in touch with Bareos terminology and technology check their great Manual in HTML or PDF version depending on which format you prefer for reading documentation. Also their FAQ provides a lot of needed answers.

This diagram may also be useful for you to get some grip on the Bareos world.

bareos-overview-small

System

As every system needs to have a name, we will use the Latin word closest to backup – replica – for our FreeBSD system hostname. The install is generally the same as in the FreeBSD Desktop – Part 2 – Install article. Here is our installed FreeBSD system with a login prompt.

freebsd-nakatomi.jpg

Sorry, couldn’t resist 🙂

Here are the 3 most important configuration files after some time spent in vi(1) with them.

root@replica:~ # cat /etc/rc.conf
# NETWORK
  hostname=replica.backup.org
  ifconfig_em0="inet 10.0.10.30/24 up"
  defaultrouter="10.0.10.1"

# DAEMONS
  zfs_enable=YES
  sshd_enable=YES
  nfs_client_enable=YES
  syslogd_flags="-ss"
  sendmail_enable=NONE

# OTHER
  clear_tmp_enable=YES
  dumpdev=NO

# BAREOS
# postgresql_enable=YES
# postgresql_class=pgsql
# bareos_dir_enable=YES
# bareos_sd_enable=YES
# bareos_fd_enable=YES
# php_fpm_enable=YES
# nginx_enable=YES

As you can see all ‘core’ services for Bareos are currently disabled on purpose. We will enable them later.

Parameters and modules to be set at boot.

root@replica:~ # cat /boot/loader.conf
# BOOT OPTIONS
  autoboot_delay=2
  kern.geom.label.disk_ident.enable=0
  kern.geom.label.gptid.enable=0

# MODULES
  zfs_load=YES

# IPC
  kern.ipc.shmseg=1024
  kern.ipc.shmmni=1024
  kern.ipc.shmseg=1024

Parameters to be set at runtime.

root@replica:~ # cat /etc/sysctl.conf
# SECURITY
  security.bsd.see_other_uids=0
  security.bsd.see_other_gids=0
  security.bsd.unprivileged_read_msgbuf=0
  security.bsd.unprivileged_proc_debug=0
  security.bsd.stack_guard_page=1
  kern.randompid=9100

# ZFS
  vfs.zfs.min_auto_ashift=12

# DISABLE ANNOYING THINGS
  kern.coredump=0
  hw.syscons.bell=0
  kern.vt.enable_bell=0

# IPC
  kern.ipc.shmall=524288
  kern.ipc.maxsockbuf=5242880
  kern.ipc.shm_allow_removed=1

After install we will disable the /zroot mounting.

root@replica:/ # zfs set mountpoint=none zroot

As we have sendmail(8) disabled we will need to take care of its queue.

root@replica:~ # cat > /etc/cron.d/sendmail-clean-clientmqueue << __EOF
# CLEAN SENDMAIL
0 * * * * root /bin/rm -r -f /var/spool/clientmqueue/*
__EOF

Assuming the NFS servers are configured in the /etc/hosts file, the ‘complete’ /etc/hosts file would look like this.

root@replica:~ # grep '^[^#]' /etc/hosts
::1        localhost localhost.my.domain
127.0.0.1  localhost localhost.my.domain
10.0.10.40 replica.backup.org replica
10.0.10.50 nfs-pri.backup.org nfs-pri
10.0.20.50 nfs-sec.backup.org nfs-sec

Let's verify outside world connectivity – needed for adding the Bareos packages.

root@replica:~ # nc -v bareos.org 443
Connection to bareos.org 443 port [tcp/https] succeeded!
^C
root@replica:~ #

Packages

As we want the latest packages we will modify /etc/pkg/FreeBSD.conf – the pkg(8) repository file – to use the latest branch.

root@replica:~ # grep '^[^#]' /etc/pkg/FreeBSD.conf
FreeBSD: {
  url: "pkg+http://pkg.FreeBSD.org/${ABI}/quarterly",
  mirror_type: "srv",
  signature_type: "fingerprints",
  fingerprints: "/usr/share/keys/pkg",
  enabled: yes
}

root@replica:~ # sed -i '' s/quarterly/latest/g /etc/pkg/FreeBSD.conf

root@replica:~ # grep '^[^#]' /etc/pkg/FreeBSD.conf
FreeBSD: {
  url: "pkg+http://pkg.FreeBSD.org/${ABI}/latest",
  mirror_type: "srv",
  signature_type: "fingerprints",
  fingerprints: "/usr/share/keys/pkg",
  enabled: yes
}

We will use the Bareos packages from pkg(8) as they are available – no need to waste time and power on compilation.

root@replica:~ # pkg search bareos
The package management tool is not yet installed on your system.
Do you want to fetch and install it now? [y/N]: y
(...)
bareos-bat-16.2.7              Backup archiving recovery open sourced (GUI)
bareos-client-16.2.7           Backup archiving recovery open sourced (client)
bareos-client-static-16.2.7    Backup archiving recovery open sourced (static client)
bareos-docs-16.2.7             Bareos document set (PDF)
bareos-server-16.2.7           Backup archiving recovery open sourced (server)
bareos-traymonitor-16.2.7      Backup archiving recovery open sourced (traymonitor)
bareos-webui-16.2.7            PHP-Frontend to manage Bareos over the web

Now we will install Bareos along with all needed components for its environment.

root@replica:~ # pkg install \
  bareos-client bareos-server bareos-webui postgresql95-server nginx \
  php56 php56-xml php56-session php56-simplexml php56-gd php56-ctype \
  php56-mbstring php56-zlib php56-tokenizer php56-iconv php56-mcrypt \
  php56-pear-DB_ldap php56-zip php56-dom php56-sqlite3 php56-gettext \
  php56-curl php56-json php56-opcache php56-wddx php56-hash php56-soap

The bareos, pgsql and www users have been added by pkg(8) along with their packages.

root@replica:~ # id bareos
uid=997(bareos) gid=997(bareos) groups=997(bareos)

root@replica:~ # id pgsql
uid=70(pgsql) gid=70(pgsql) groups=70(pgsql)

root@replica:~ # id www
uid=80(www) gid=80(www) groups=80(www)

PostgreSQL

First we will set up the PostgreSQL database.

We will add a separate pgsql login class for the PostgreSQL database user.

root@replica:~ # cat >> /etc/login.conf << __EOF
# PostgreSQL
pgsql:\
        :lang=en_US.UTF-8:\
        :setenv=LC_COLLATE=C:\
        :tc=default:

__EOF

This is one of the rare occasions when I would appreciate the -p flag from the AIX grep command to display the whole paragraph 😉

root@replica:~ # grep -B 1 -A 3 pgsql /etc/login.conf
# PostgreSQL
pgsql:\
        :lang=en_US.UTF-8:\
        :setenv=LC_COLLATE=C:\
        :tc=default:

Let's rebuild the login capabilities database.

root@replica:~ # cap_mkdb /etc/login.conf

Here are the PostgreSQL rc(8) startup script ‘options’ that can be set in the /etc/rc.conf file.

root@replica:~ # grep '#  postgresql' /usr/local/etc/rc.d/postgresql
#  postgresql_enable="YES"
#  postgresql_data="/usr/local/pgsql/data"
#  postgresql_flags="-w -s -m fast"
#  postgresql_initdb_flags="--encoding=utf-8 --lc-collate=C"
#  postgresql_class="default"
#  postgresql_profiles=""

We only need postgresql_enable and postgresql_class to be set.

We will enable them now in the /etc/rc.conf file.

root@replica:~ # grep -A 10 BAREOS /etc/rc.conf
# BAREOS
  postgresql_enable=YES
  postgresql_class=pgsql
# bareos_dir_enable=YES
# bareos_sd_enable=YES
# bareos_fd_enable=YES
# php_fpm_enable=YES
# nginx_enable=YES

We will now init the PostgreSQL database for Bareos.

root@replica:~ # /usr/local/etc/rc.d/postgresql initdb
The files belonging to this database system will be owned by user "pgsql".
This user must also own the server process.

The database cluster will be initialized with locales
  COLLATE:  C
  CTYPE:    en_US.UTF-8
  MESSAGES: en_US.UTF-8
  MONETARY: en_US.UTF-8
  NUMERIC:  en_US.UTF-8
  TIME:     en_US.UTF-8
The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory /usr/local/pgsql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
creating template1 database in /usr/local/pgsql/data/base/1 ... ok
initializing pg_authid ... ok
initializing dependencies ... ok
creating system views ... ok
loading system objects' descriptions ... ok
creating collations ... ok
creating conversions ... ok
creating dictionaries ... ok
setting privileges on built-in objects ... ok
creating information schema ... ok
loading PL/pgSQL server-side language ... ok
vacuuming database template1 ... ok
copying template1 to template0 ... ok
copying template1 to postgres ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    /usr/local/bin/pg_ctl -D /usr/local/pgsql/data -l logfile start

… and start it.

root@replica:~ # /usr/local/etc/rc.d/postgresql start
LOG:  ending log output to stderr
HINT:  Future log output will go to log destination "syslog".

We will now take care of the Bareos server configuration. There are a lot of *.sample files that we do not need. We also need to take care of permissions.

root@replica:~ # chown -R bareos:bareos /usr/local/etc/bareos
root@replica:~ # find /usr/local/etc/bareos -type f -exec chmod 640 {} ';'
root@replica:~ # find /usr/local/etc/bareos -type d -exec chmod 750 {} ';'
root@replica:~ # find /usr/local/etc/bareos -name \*\.sample -delete

We also need to change permissions for the /var/run and /var/db directories for Bareos.

root@replica:~ # chown -R bareos:bareos /var/db/bareos
root@replica:~ # chown -R bareos:bareos /var/run/bareos

To keep a ‘trace’ of our changes we will keep a copy of the original configuration to track what we have changed in the process of configuring our Bareos environment.

root@replica:~ # cp -a /usr/local/etc/bareos /usr/local/etc/bareos.ORG

Now we will configure the Bareos Catalog in the /usr/local/etc/bareos/bareos-dir.d/catalog/MyCatalog.conf file. Here are its contents after our modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/catalog/MyCatalog.conf
Catalog {
  Name = MyCatalog
  dbdriver = "postgresql"
  dbname = "bareos"
  dbuser = "bareos"
  dbpassword = "BAREOS-DATABASE-PASSWORD"
}

Let's make sure that the pgsql and www users are in the bareos group so they can read its configuration files.

root@replica:~ # pw groupmod bareos -m pgsql

root@replica:~ # id pgsql
uid=70(pgsql) gid=70(pgsql) groups=70(pgsql),997(bareos)

root@replica:~ # pw groupmod bareos -m www

root@replica:~ # id www
uid=80(www) gid=80(www) groups=80(www),997(bareos)

Now we will prepare the PostgreSQL database for our Bareos instance. We will use the scripts provided by the Bareos package from the /usr/local/lib/bareos/scripts path.

root@replica:~ # su - pgsql

$ whoami
pgsql

$ /usr/local/lib/bareos/scripts/create_bareos_database
Creating postgresql database
CREATE DATABASE
ALTER DATABASE
Database encoding OK
Creating of bareos database succeeded.

$ /usr/local/lib/bareos/scripts/make_bareos_tables
Making postgresql tables
CREATE TABLE
ALTER TABLE
CREATE INDEX
CREATE TABLE
ALTER TABLE
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE INDEX
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
DELETE 0
INSERT 0 1
Creation of Bareos PostgreSQL tables succeeded.

$ /usr/local/lib/bareos/scripts/grant_bareos_privileges
Granting postgresql tables
CREATE ROLE
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
Privileges for user bareos granted ON database bareos.

We can now verify that the needed database has been created.

root@replica:~ # su -m bareos -c 'psql -l'
                             List of databases
   Name    | Owner | Encoding  | Collate |    Ctype    | Access privileges 
-----------+-------+-----------+---------+-------------+-------------------
 bareos    | pgsql | SQL_ASCII | C       | C           | 
 postgres  | pgsql | UTF8      | C       | en_US.UTF-8 | 
 template0 | pgsql | UTF8      | C       | en_US.UTF-8 | =c/pgsql         +
           |       |           |         |             | pgsql=CTc/pgsql
 template1 | pgsql | UTF8      | C       | en_US.UTF-8 | =c/pgsql         +
           |       |           |         |             | pgsql=CTc/pgsql
(4 rows)

We will also add a housekeeping script for the PostgreSQL database and put it into crontab(1).

root@replica:~ # su - pgsql

$ whoami
pgsql

$ cat > /usr/local/pgsql/vacuum.sh << __EOF
#! /bin/sh

/usr/local/bin/vacuumdb -a -z 1> /dev/null 2> /dev/null
/usr/local/bin/reindexdb -a   1> /dev/null 2> /dev/null
/usr/local/bin/reindexdb -s   1> /dev/null 2> /dev/null
__EOF

$ chmod +x /usr/local/pgsql/vacuum.sh

$ cat /usr/local/pgsql/vacuum.sh
#! /bin/sh

/usr/local/bin/vacuumdb -a -z 1> /dev/null 2> /dev/null
/usr/local/bin/reindexdb -a   1> /dev/null 2> /dev/null
/usr/local/bin/reindexdb -s   1> /dev/null 2> /dev/null

$ crontab -e

$ exit

root@replica:~ # cat /var/cron/tabs/pgsql
# DO NOT EDIT THIS FILE - edit the master and reinstall.
# (/tmp/crontab.Be9j9VVCUa installed on Thu Apr 26 21:45:04 2018)
# (Cron version -- $FreeBSD$)
0 0 * * * /usr/local/pgsql/vacuum.sh

root@replica:~ # su -m pgsql -c 'crontab -l'
0 0 * * * /usr/local/pgsql/vacuum.sh
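
As a side note, the same entry can be installed non-interactively, as the crontab(1) from the Vixie cron used by FreeBSD accepts a single dash to read the new crontab from standard input:

root@replica:~ # echo '0 0 * * * /usr/local/pgsql/vacuum.sh' | su -m pgsql -c 'crontab -'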

Storage

I assume that the primary storage is mounted at /bareos from one NFS server, while the Disaster Recovery site is mounted at /bareos-dr from another NFS server. Below is an example NFS configuration for these mount points.

root@replica:~ # mkdir /bareos /bareos-dr

root@replica:~ # mount -t nfs
nfs-pri.backup.org:/export/bareos on /bareos (nfs, noatime)
nfs-sec.backup.org:/export/bareos-dr on /bareos-dr (nfs, noatime)

root@replica:~ # cat >> /etc/fstab << __EOF
#DEV                                  #MNT        #FS  #OPTS                                                         #DP
nfs-pri.backup.org:/export/bareos     /bareos     nfs  rw,noatime,rsize=1048576,wsize=1048576,readahead=4,soft,intr  0 0
nfs-sec.backup.org:/export/bareos-dr  /bareos-dr  nfs  rw,noatime,rsize=1048576,wsize=1048576,readahead=4,soft,intr  0 0
__EOF
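
With these fstab(5) entries in place the shares can later be mounted by their mount point alone, for example after a reboot:

root@replica:~ # mount /bareos
root@replica:~ # mount /bareos-dr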

root@replica:~ # mkdir -p /bareos/bootstrap
root@replica:~ # mkdir -p /bareos/restore
root@replica:~ # mkdir -p /bareos/storage/FileStorage

root@replica:~ # mkdir -p /bareos-dr/bootstrap
root@replica:~ # mkdir -p /bareos-dr/restore
root@replica:~ # mkdir -p /bareos-dr/storage/FileStorage

root@replica:~ # chown -R bareos:bareos /bareos /bareos-dr

root@replica:~ # find /bareos /bareos-dr -ls | column -t
69194  1  drwxr-xr-x  5  bareos  bareos  5  Apr  27  00:42  /bareos
72239  1  drwxr-xr-x  2  bareos  bareos  2  Apr  27  00:42  /bareos/restore
72240  1  drwxr-xr-x  3  bareos  bareos  3  Apr  27  00:42  /bareos/storage
72241  1  drwxr-xr-x  2  bareos  bareos  2  Apr  27  00:42  /bareos/storage/FileStorage
72238  1  drwxr-xr-x  2  bareos  bareos  2  Apr  27  00:42  /bareos/bootstrap
69195  1  drwxr-xr-x  5  bareos  bareos  5  Apr  27  00:43  /bareos-dr
72254  1  drwxr-xr-x  3  bareos  bareos  3  Apr  27  00:43  /bareos-dr/storage
72255  1  drwxr-xr-x  2  bareos  bareos  2  Apr  27  00:43  /bareos-dr/storage/FileStorage
72253  1  drwxr-xr-x  2  bareos  bareos  2  Apr  27  00:42  /bareos-dr/restore
72252  1  drwxr-xr-x  2  bareos  bareos  2  Apr  27  00:42  /bareos-dr/bootstrap

Bareos

As we already used BAREOS-DATABASE-PASSWORD for the bareos user on PostgreSQL's Bareos database, we will use the passwords listed below for the remaining Bareos subsystems. I think it is self-explanatory which Bareos component each of them belongs to 🙂

  • BAREOS-DATABASE-PASSWORD
  • BAREOS-DIR-PASSWORD
  • BAREOS-SD-PASSWORD
  • BAREOS-FD-PASSWORD
  • BAREOS-MON-PASSWORD
  • ADMIN-PASSWORD
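
Needless to say, in a real deployment each of these placeholders should be replaced with a unique random string. One quick way to generate such strings, shown here only as an example, is openssl(1):

root@replica:~ # openssl rand -base64 24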

We will now configure all these Bareos subsystems.

We already modified the MyCatalog.conf file; here are its contents.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/catalog/MyCatalog.conf
Catalog {
  Name = MyCatalog
  dbdriver = "postgresql"
  dbname = "bareos"
  dbuser = "bareos"
  dbpassword = "BAREOS-DATABASE-PASSWORD"
}

Contents of the /usr/local/etc/bareos/bconsole.d/bconsole.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bconsole.d/bconsole.conf
#
# Bareos User Agent (or Console) Configuration File
#

Director {
  Name = replica.backup.org
  address = localhost
  Password = "BAREOS-DIR-PASSWORD"
  Description = "Bareos Console credentials for local Director"
}

Contents of the /usr/local/etc/bareos/bareos-dir.d/director/bareos-dir.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/director/bareos-dir.conf
Director {
  Name = replica.backup.org
  QueryFile = "/usr/local/lib/bareos/scripts/query.sql"
  Maximum Concurrent Jobs = 100
  Password = "BAREOS-DIR-PASSWORD"
  Messages = Daemon
  Auditing = yes

  # Enable the Heartbeat if you experience connection losses
  # (eg. because of your router or firewall configuration).
  # Additionally the Heartbeat can be enabled in bareos-sd and bareos-fd.
  #
  # Heartbeat Interval = 1 min

  # remove comment in next line to load dynamic backends from specified directory
  # Backend Directory = /usr/local/lib

  # remove comment from "Plugin Directory" to load plugins from specified directory.
  # if "Plugin Names" is defined, only the specified plugins will be loaded,
  # otherwise all director plugins (*-dir.so) from the "Plugin Directory".
  #
  # Plugin Directory = /usr/local/lib/bareos/plugins
  # Plugin Names = ""
}

Contents of the /usr/local/etc/bareos/bareos-dir.d/job/RestoreFiles.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/job/RestoreFiles.conf
Job {
  Name = "RestoreFiles"
  Description = "Standard Restore."
  Type = Restore
  Client = Default
  FileSet = "SelfTest"
  Storage = File
  Pool = BR-MO
  Messages = Standard
  Where = /bareos/restore
  Accurate = yes
}

New /usr/local/etc/bareos/bareos-dir.d/client/Default.conf file.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/client/Default.conf
Client {
  Name = Default
  address = replica.backup.org
  Password = "BAREOS-FD-PASSWORD"
}

New /usr/local/etc/bareos/bareos-dir.d/client/replica.backup.org.conf file.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/client/replica.backup.org.conf
Client {
  Name = replica.backup.org
  Description = "Client resource of the Director itself."
  address = replica.backup.org
  Password = "BAREOS-FD-PASSWORD"
}

File below is left unchanged.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/job/BackupCatalog.conf
Job {
  Name = "BackupCatalog"
  Description = "Backup the catalog database (after the nightly save)"
  JobDefs = "DefaultJob"
  Level = Full
  FileSet="Catalog"
  Schedule = "WeeklyCycleAfterBackup"

  # This creates an ASCII copy of the catalog
  # Arguments to make_catalog_backup.pl are:
  #  make_catalog_backup.pl <catalog-name>
  RunBeforeJob = "/usr/local/lib/bareos/scripts/make_catalog_backup.pl MyCatalog"

  # This deletes the copy of the catalog
  RunAfterJob  = "/usr/local/lib/bareos/scripts/delete_catalog_backup"

  # This sends the bootstrap via mail for disaster recovery.
  # Should be sent to another system, please change recipient accordingly
  Write Bootstrap = "|/usr/local/bin/bsmtp -h localhost -f \"\(Bareos\) \" -s \"Bootstrap for Job %j\" root@localhost" # (#01)
  Priority = 11                   # run after main backup
}

File below is left unchanged.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/messages/Standard.conf
Messages {
  Name = Standard
  Description = "Reasonable message delivery -- send most everything to email address and to the console."
  operatorcommand = "/usr/local/bin/bsmtp -h localhost -f \"\(Bareos\) \<%r\>\" -s \"Bareos: Intervention needed for %j\" %r"
  mailcommand = "/usr/local/bin/bsmtp -h localhost -f \"\(Bareos\) \<%r\>\" -s \"Bareos: %t %e of %c %l\" %r"
  operator = root@localhost = mount                                 # (#03)
  mail = root@localhost = all, !skipped, !saved, !audit             # (#02)
  console = all, !skipped, !saved, !audit
  append = "/var/log/bareos/bareos.log" = all, !skipped, !saved, !audit
  catalog = all, !skipped, !saved, !audit
}

File below is left unchanged.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/messages/Daemon.conf
Messages {
  Name = Daemon
  Description = "Message delivery for daemon messages (no job)."
  mailcommand = "/usr/local/bin/bsmtp -h localhost -f \"\(Bareos\) \<%r\>\" -s \"Bareos daemon message\" %r"
  mail = root@localhost = all, !skipped, !audit # (#02)
  console = all, !skipped, !saved, !audit
  append = "/var/log/bareos/bareos.log" = all, !skipped, !audit
  append = "/var/log/bareos/bareos-audit.log" = audit
}

Pools

By default Bareos comes with four pools configured; we will not use them, so we will delete their configuration files.

root@replica:~ # ls -l /usr/local/etc/bareos/bareos-dir.d/pool
total 14
-rw-rw----  1 bareos  bareos  536 Apr 16 08:14 Differential.conf
-rw-rw----  1 bareos  bareos  512 Apr 16 08:14 Full.conf
-rw-rw----  1 bareos  bareos  534 Apr 16 08:14 Incremental.conf
-rw-rw----  1 bareos  bareos   48 Apr 16 08:14 Scratch.conf

root@replica:~ # rm -f /usr/local/etc/bareos/bareos-dir.d/pool/*.conf

We will now create our two pools, one for the DAILY backups and one for the MONTHLY backups.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/pool/BRONZE-DAILY-POOL.conf
Pool {
  Name = BR-DA
  Pool Type = Backup
  Recycle = yes                       # Bareos can automatically recycle Volumes
  AutoPrune = yes                     # Prune expired volumes
  Volume Retention = 7 days           # How long should the Full Backups be kept? (#06)
  Maximum Volume Bytes = 2G           # Limit Volume size to something reasonable
  Maximum Volumes = 100000            # Limit number of Volumes in Pool
  Label Format = "BR-DA-"             # Volumes will be labeled "BR-DA-"
}

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/pool/BRONZE-MONTHLY-POOL.conf
Pool {
  Name = BR-MO
  Pool Type = Backup
  Recycle = yes                       # Bareos can automatically recycle Volumes
  AutoPrune = yes                     # Prune expired volumes
  Volume Retention = 120 days         # How long should the Full Backups be kept? (#06)
  Maximum Volume Bytes = 2G           # Limit Volume size to something reasonable
  Maximum Volumes = 100000            # Limit number of Volumes in Pool
  Label Format = "BR-MO-"             # Volumes will be labeled "BR-MO-"
}
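
A quick sanity check of these numbers: with Maximum Volume Bytes = 2G and Maximum Volumes = 100000 a single pool may grow to roughly 200 TB, so the practical limit here is the underlying NFS storage rather than the pool configuration. Once the Director is running you will also be able to verify that both pools were loaded, for example with:

root@replica:~ # echo "show pools" | bconsole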

File below is left unchanged.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/schedule/WeeklyCycle.conf
Schedule {
  Name = "WeeklyCycle"
  Run = Full 1st sat at 21:00                   # (#04)
  Run = Differential 2nd-5th sat at 21:00       # (#07)
  Run = Incremental mon-fri at 21:00            # (#10)
}

Contents of the /usr/local/etc/bareos/bareos-dir.d/jobdefs/DefaultJob.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/jobdefs/DefaultJob.conf
JobDefs {
  Name = "DefaultJob"
  Type = Backup
  Level = Differential
  Client = Default
  FileSet = "SelfTest"
  Schedule = "WeeklyCycle"
  Storage = File
  Messages = Standard
  Pool = BR-DA
  Priority = 10
  Write Bootstrap = "/bareos/bootstrap/%c.bsr"
}

Contents of the /usr/local/etc/bareos/bareos-dir.d/storage/File.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/storage/File.conf
Storage {
  Name = File
  Address = replica.backup.org
  Password = "BAREOS-SD-PASSWORD"
  Device = FileStorage
  Media Type = File
}

Contents of the /usr/local/etc/bareos/bareos-dir.d/console/bareos-mon.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/console/bareos-mon.conf
Console {
  Name = bareos-mon
  Description = "Restricted console used by tray-monitor to get the status of the director."
  Password = "BAREOS-MON-PASSWORD"
  CommandACL = status, .status
  JobACL = *all*
}

Contents of the /usr/local/etc/bareos/bareos-dir.d/fileset/Catalog.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/fileset/Catalog.conf
FileSet {
  Name = "Catalog"
  Description = "Backup the catalog dump and Bareos configuration files."
  Include {
    Options {
      signature = MD5
      Compression = lzo
    }
    File = "/var/db/bareos"
    File = "/usr/local/etc/bareos"
  }
}

Contents of the /usr/local/etc/bareos/bareos-dir.d/fileset/SelfTest.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/fileset/SelfTest.conf
FileSet {
  Name = "SelfTest"
  Description = "fileset just to backup some files for selftest"
  Include {
    Options {
      Signature   = MD5
      Compression = lzo
    }
    File = "/usr/local/sbin"
  }
}

We do not need the bundled LinuxAll.conf and WindowsAllDrives.conf filesets, so we will delete them.

root@replica:~ # ls -l /usr/local/etc/bareos/bareos-dir.d/fileset/
total 18
-rw-rw----  1 bareos  bareos  250 Apr 27 02:25 Catalog.conf
-rw-rw----  1 bareos  bareos  765 Apr 16 08:14 LinuxAll.conf
-rw-rw----  1 bareos  bareos  210 Apr 27 02:27 SelfTest.conf
-rw-rw----  1 bareos  bareos  362 Apr 16 08:14 WindowsAllDrives.conf

root@replica:~ # rm -f /usr/local/etc/bareos/bareos-dir.d/fileset/LinuxAll.conf

root@replica:~ # rm -f /usr/local/etc/bareos/bareos-dir.d/fileset/WindowsAllDrives.conf

We will now define two new filesets in the Windows.conf and UNIX.conf files.

New /usr/local/etc/bareos/bareos-dir.d/fileset/Windows.conf file.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/fileset/Windows.conf
FileSet {
  Name = Windows
  Enable VSS = yes
  Include {
    Options {
      Signature = MD5
      Drive Type = fixed
      IgnoreCase = yes
      WildFile = "[A-Z]:/pagefile.sys"
      WildDir  = "[A-Z]:/RECYCLER"
      WildDir  = "[A-Z]:/$RECYCLE.BIN"
      WildDir  = "[A-Z]:/System Volume Information"
      Exclude = yes
      Compression = lzo
    }
    File = /
  }
}

New /usr/local/etc/bareos/bareos-dir.d/fileset/UNIX.conf file.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/fileset/UNIX.conf
FileSet {
  Name = "UNIX"
  Include {
    Options {
      Signature = MD5 # calculate md5 checksum per file
      One FS = No     # change into other filesystems
      FS Type = ufs
      FS Type = btrfs
      FS Type = ext2  # filesystems of given types will be backed up
      FS Type = ext3  # others will be ignored
      FS Type = ext4
      FS Type = reiserfs
      FS Type = jfs
      FS Type = xfs
      FS Type = zfs
      noatime = yes
      Compression = lzo
    }
    File = /
  }
  # Things that usually have to be excluded
  # You have to exclude /tmp
  # on your bareos server
  Exclude {
    File = /var/db/bareos
    File = /tmp
    File = /proc
    File = /sys
    File = /var/tmp
    File = /.journal
    File = /.fsck
  }
}

File below is left unchanged.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/profile/operator.conf
Profile {
   Name = operator
   Description = "Profile allowing normal Bareos operations."

   Command ACL = !.bvfs_clear_cache, !.exit, !.sql
   Command ACL = !configure, !create, !delete, !purge, !sqlquery, !umount, !unmount
   Command ACL = *all*

   Catalog ACL = *all*
   Client ACL = *all*
   FileSet ACL = *all*
   Job ACL = *all*
   Plugin Options ACL = *all*
   Pool ACL = *all*
   Schedule ACL = *all*
   Storage ACL = *all*
   Where ACL = *all*
}

Contents of the /usr/local/etc/bareos/bareos-sd.d/messages/Standard.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-sd.d/messages/Standard.conf
Messages {
  Name = Standard
  Director = replica.backup.org = all
  Description = "Send all messages to the Director."
}

We will use the /bareos/storage/FileStorage path as our FileStorage location for backups.

Contents of the /usr/local/etc/bareos/bareos-sd.d/device/FileStorage.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-sd.d/device/FileStorage.conf
Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /bareos/storage/FileStorage
  LabelMedia = yes;                   # lets Bareos label unlabeled media
  Random Access = yes;
  AutomaticMount = yes;               # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Description = "File device. A connecting Director must have the same Name and MediaType."
}

Contents of the /usr/local/etc/bareos/bareos-sd.d/storage/bareos-sd.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-sd.d/storage/bareos-sd.conf
Storage {
  Name = replica.backup.org
  Maximum Concurrent Jobs = 20

  # remove comment from "Plugin Directory" to load plugins from specified directory.
  # if "Plugin Names" is defined, only the specified plugins will be loaded,
  # otherwise all storage plugins (*-sd.so) from the "Plugin Directory".
  #
  # Plugin Directory = /usr/local/lib/bareos/plugins
  # Plugin Names = ""
}

Contents of the /usr/local/etc/bareos/bareos-sd.d/director/bareos-mon.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-sd.d/director/bareos-mon.conf
Director {
  Name = bareos-mon
  Password = "BAREOS-SD-PASSWORD"
  Monitor = yes
  Description = "Restricted Director, used by tray-monitor to get the status of this storage daemon."
}

Contents of the /usr/local/etc/bareos/bareos-sd.d/director/bareos-dir.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-sd.d/director/bareos-dir.conf
Director {
  Name = replica.backup.org
  Password = "BAREOS-SD-PASSWORD"
  Description = "Director, who is permitted to contact this storage daemon."
}

Contents of the /usr/local/etc/bareos/bareos-fd.d/messages/Standard.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-fd.d/messages/Standard.conf
Messages {
  Name = Standard
  Director = replica.backup.org = all, !skipped, !restored
  Description = "Send relevant messages to the Director."
}

Contents of the /usr/local/etc/bareos/bareos-fd.d/director/bareos-dir.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-fd.d/director/bareos-dir.conf
Director {
  Name = replica.backup.org
  Password = "BAREOS-FD-PASSWORD"
  Description = "Allow the configured Director to access this file daemon."
}

Contents of the /usr/local/etc/bareos/bareos-fd.d/director/bareos-mon.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-fd.d/director/bareos-mon.conf
Director {
  Name = bareos-mon
  Password = "BAREOS-MON-PASSWORD"
  Monitor = yes
  Description = "Restricted Director, used by tray-monitor to get the status of this file daemon."
}

Contents of the /usr/local/etc/bareos/bareos-fd.d/client/myself.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-fd.d/client/myself.conf
Client {
  Name = replica.backup.org
  Maximum Concurrent Jobs = 20

  # remove comment from "Plugin Directory" to load plugins from specified directory.
  # if "Plugin Names" is defined, only the specified plugins will be loaded,
  # otherwise all storage plugins (*-fd.so) from the "Plugin Directory".
  #
  # Plugin Directory = /usr/local/lib/bareos/plugins
  # Plugin Names = ""

  # if compatible is set to yes, we are compatible with bacula
  # if set to no, new bareos features are enabled which is the default
  # compatible = yes
}

Contents of the /usr/local/etc/bareos/bareos-dir.d/client/bareos-fd.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/client/bareos-fd.conf
Client {
  Name = bareos-fd
  Description = "Client resource of the Director itself."
  Address = localhost
  Password = "BAREOS-FD-PASSWORD"
}

Let's see which files and Bareos components hold which passwords.

root@replica:~ # cd /usr/local/etc/bareos

root@replica:/usr/local/etc/bareos # pwd
/usr/local/etc/bareos

root@replica:/usr/local/etc/bareos # grep -r Password . | sort -k 4 | column -t
./bareos-dir.d/director/bareos-dir.conf:        Password  =  "BAREOS-DIR-PASSWORD"
./bconsole.d/bconsole.conf:                     Password  =  "BAREOS-DIR-PASSWORD"
./bareos-dir.d/client/Default.conf:             Password  =  "BAREOS-FD-PASSWORD"
./bareos-dir.d/client/bareos-fd.conf:           Password  =  "BAREOS-FD-PASSWORD"
./bareos-dir.d/client/replica.backup.org.conf:  Password  =  "BAREOS-FD-PASSWORD"
./bareos-fd.d/director/bareos-dir.conf:         Password  =  "BAREOS-FD-PASSWORD"
./bareos-dir.d/console/bareos-mon.conf:         Password  =  "BAREOS-MON-PASSWORD"
./bareos-fd.d/director/bareos-mon.conf:         Password  =  "BAREOS-MON-PASSWORD"
./bareos-dir.d/storage/File.conf:               Password  =  "BAREOS-SD-PASSWORD"
./bareos-sd.d/director/bareos-dir.conf:         Password  =  "BAREOS-SD-PASSWORD"
./bareos-sd.d/director/bareos-mon.conf:         Password  =  "BAREOS-SD-PASSWORD"

Let's fix the permissions after creating all these new files.

root@replica:~ # chown -R bareos:bareos /usr/local/etc/bareos
root@replica:~ # find /usr/local/etc/bareos -type f -exec chmod 640 {} ';'
root@replica:~ # find /usr/local/etc/bareos -type d -exec chmod 750 {} ';'

Bareos WebUI

Now we will add/configure files for the Bareos WebUI interface.

The main Nginx webserver configuration file.

root@replica:~ # cat /usr/local/etc/nginx/nginx.conf
user                 www;
worker_processes     4;
worker_rlimit_nofile 51200;
error_log            /var/log/nginx/error.log;

events {
  worker_connections 1024;
}

http {
  include           mime.types;
  default_type      application/octet-stream;
  log_format        main '$remote_addr - $remote_user [$time_local] "$request" ';
  access_log        /var/log/nginx/access.log main;
  sendfile          on;
  keepalive_timeout 65;

  server {
    listen       9100;
    server_name  replica.backup.org bareos;
    root         /usr/local/www/bareos-webui/public;

    location / {
      index index.php;
      try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
      fastcgi_pass 127.0.0.1:9000;
      fastcgi_param APPLICATION_ENV production;
      fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
      include fastcgi_params;
      try_files $uri =404;
    }
  }
}
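
You can ask nginx(8) itself to validate this file before starting the daemon, which is the same sanity check the rc(8) script performs at startup:

root@replica:~ # nginx -t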

For PHP we will start from the bundled /usr/local/etc/php.ini-production config file shipped with the package.

root@replica:~ # cp /usr/local/etc/php.ini-production /usr/local/etc/php.ini

root@replica:~ # vi /usr/local/etc/php.ini

We only add the timezone; for my location it is Europe/Warsaw.

root@replica:~ # diff -u php.ini-production php.ini
--- php.ini-production  2017-08-12 03:23:36.000000000 +0200
+++ php.ini     2017-09-12 18:50:40.513138000 +0200
@@ -934,6 +934,7 @@
 ; Defines the default timezone used by the date functions
 ; http://php.net/date.timezone
 ;date.timezone =
+date.timezone = Europe/Warsaw

 ; http://php.net/date.default-latitude
 ;date.default_latitude = 31.7667

Here is the PHP php-fpm daemon configuration.

root@replica:~ # cat /usr/local/etc/php-fpm.conf
[global]
pid = run/php-fpm.pid
log_level = notice

[www]
user = www
group = www
listen = 127.0.0.1:9000
listen.backlog = -1
listen.owner = www
listen.group = www
listen.mode = 0660
listen.allowed_clients = 127.0.0.1
pm = static
pm.max_children = 4
pm.start_servers = 1
pm.min_spare_servers = 0
pm.max_spare_servers = 4
pm.process_idle_timeout = 1000s;
pm.max_requests = 500
request_terminate_timeout = 0
rlimit_files = 51200
env[HOSTNAME] = $HOSTNAME
env[PATH] = /usr/local/bin:/usr/bin:/bin
env[TMP] = /tmp
env[TMPDIR] = /tmp
env[TEMP] = /tmp

Rest of the Bareos WebUI configuration.

New /usr/local/etc/bareos/bareos-dir.d/console/admin.conf file.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/console/admin.conf
Console {
  Name = admin
  Password = ADMIN-PASSWORD
  Profile = webui-admin
}

New /usr/local/etc/bareos/bareos-dir.d/profile/webui-admin.conf file.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/profile/webui-admin.conf
Profile {
  Name = webui-admin
  CommandACL = !.bvfs_clear_cache, !.exit, !.sql, !configure, !create, !delete, !purge, !sqlquery, !umount, !unmount, *all*
  Job ACL = *all*
  Schedule ACL = *all*
  Catalog ACL = *all*
  Pool ACL = *all*
  Storage ACL = *all*
  Client ACL = *all*
  FileSet ACL = *all*
  Where ACL = *all*
  Plugin Options ACL = *all*
}

You may add other directors here as well.

Modified /usr/local/etc/bareos-webui/directors.ini file.

root@replica:~ # cat /usr/local/etc/bareos-webui/directors.ini
;------------------------------------------------------------------------------
; Section localhost-dir
;------------------------------------------------------------------------------
[replica.backup.org]
enabled = "yes"
diraddress = "replica.backup.org"
dirport = 9101
catalog = "MyCatalog"

Modified /usr/local/etc/bareos-webui/configuration.ini file.

root@replica:~ # cat /usr/local/etc/bareos-webui/configuration.ini
;------------------------------------------------------------------------------
; SESSION SETTINGS
;------------------------------------------------------------------------------
[session]
timeout=3600

;------------------------------------------------------------------------------
; DASHBOARD SETTINGS
;------------------------------------------------------------------------------
[dashboard]
autorefresh_interval=60000

;------------------------------------------------------------------------------
; TABLE SETTINGS
;------------------------------------------------------------------------------
[tables]
pagination_values=10,25,50,100
pagination_default_value=25
save_previous_state=false

;------------------------------------------------------------------------------
; VARIOUS SETTINGS
;------------------------------------------------------------------------------
[autochanger]
labelpooltype=scratch

Last but not least, we need to set permissions for Bareos WebUI configuration files.

root@replica:~ # chown -R www:www /usr/local/etc/bareos-webui
root@replica:~ # chown -R www:www /usr/local/www/bareos-webui

Logs

Lets create the needed log files and fix their permissions.

root@replica:~ # chown -R bareos:bareos /var/log/bareos
root@replica:~ # :>               /var/log/php-fpm.log
root@replica:~ # chown -R www:www /var/log/php-fpm.log
root@replica:~ # chown -R www:www /var/log/nginx

We will now add rules for the newsyslog(8) log rotation daemon; we do not want our filesystem to fill up, do we?

As newsyslog(8) includes the *.conf.d directories, we will use them instead of modifying the main /etc/newsyslog.conf configuration file.

root@replica:~ # grep conf\\.d /etc/newsyslog.conf
<include> /etc/newsyslog.conf.d/*
<include> /usr/local/etc/newsyslog.conf.d/*

root@replica:~ # mkdir -p /usr/local/etc/newsyslog.conf.d

root@replica:~ # cat > /usr/local/etc/newsyslog.conf.d/bareos << __EOF
# BAREOS
/var/log/php-fpm.log             www:www       640  7     100    @T00  J
/var/log/nginx/access.log        www:www       640  7     100    @T00  J
/var/log/nginx/error.log         www:www       640  7     100    @T00  J
/var/log/bareos/bareos.log       bareos:bareos 640  7     100    @T00  J
/var/log/bareos/bareos-audit.log bareos:bareos 640  7     100    @T00  J
__EOF

Let's verify that newsyslog(8) understands our configuration.

root@replica:~ # newsyslog -v | tail -5
/var/log/php-fpm.log : --> will trim at Tue May  1 00:00:00 2018
/var/log/nginx/access.log : --> will trim at Tue May  1 00:00:00 2018
/var/log/nginx/error.log : --> will trim at Tue May  1 00:00:00 2018
/var/log/bareos/bareos.log : --> will trim at Tue May  1 00:00:00 2018
/var/log/bareos/bareos-audit.log : --> will trim at Tue May  1 00:00:00 2018

Skel

We now need to create the so-called Bareos skel files, which gather all the configuration into the single file each rc(8) script expects.

If we do not do that, the Bareos services will not start and we will see an error like the one below.

root@replica:~ # /usr/local/etc/rc.d/bareos-sd onestart
Starting bareos_sd.
27-Apr 02:59 bareos-sd JobId 0: Error: parse_conf.c:580 Failed to read config file "/usr/local/etc/bareos/bareos-sd.conf"
bareos-sd ERROR TERMINATION
parse_conf.c:148 Failed to find config filename.
/usr/local/etc/rc.d/bareos-sd: WARNING: failed to start bareos_sd

Let's create them then …

root@replica:~ # cat > /usr/local/etc/bareos/bareos-dir.conf << __EOF
@/usr/local/etc/bareos/bareos-dir.d/*/*
__EOF

root@replica:~ # cat > /usr/local/etc/bareos/bareos-fd.conf << __EOF
@/usr/local/etc/bareos/bareos-fd.d/*/*
__EOF

root@replica:~ # cat > /usr/local/etc/bareos/bareos-sd.conf << __EOF
@/usr/local/etc/bareos/bareos-sd.d/*/*
__EOF

root@replica:~ # cat > /usr/local/etc/bareos/bconsole.conf << __EOF
@/usr/local/etc/bareos/bconsole.d/*
__EOF

… and verify their contents.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.conf
@/usr/local/etc/bareos/bareos-dir.d/*/*

root@replica:~ # cat /usr/local/etc/bareos/bareos-fd.conf
@/usr/local/etc/bareos/bareos-fd.d/*/*

root@replica:~ # cat /usr/local/etc/bareos/bareos-sd.conf
@/usr/local/etc/bareos/bareos-sd.d/*/*

root@replica:~ # cat /usr/local/etc/bareos/bconsole.conf
@/usr/local/etc/bareos/bconsole.d/*

After all our modifications and added files, let's make sure that the /usr/local/etc/bareos directory permissions are properly set.

root@replica:~ # chown -R bareos:bareos /usr/local/etc/bareos
root@replica:~ # find /usr/local/etc/bareos -type f -exec chmod 640 {} ';'
root@replica:~ # find /usr/local/etc/bareos -type d -exec chmod 750 {} ';'

It's Alive!

Back to our system settings, we will add service start to the main FreeBSD /etc/rc.conf file.

After the modifications our final /etc/rc.conf file will look as follows.

root@replica:~ # cat /etc/rc.conf
# NETWORK
  hostname=replica.backup.org
  ifconfig_em0="inet 10.0.10.30/24 up"
  defaultrouter="10.0.10.1"

# DAEMONS
  zfs_enable=YES
  sshd_enable=YES
  nfs_client_enable=YES
  syslogd_flags="-ss"
  sendmail_enable=NONE

# OTHER
  clear_tmp_enable=YES
  dumpdev=NO

# BAREOS
  postgresql_enable=YES
  postgresql_class=pgsql
  bareos_dir_enable=YES
  bareos_sd_enable=YES
  bareos_fd_enable=YES
  php_fpm_enable=YES
  nginx_enable=YES

As the PostgreSQL server is already running …

root@replica:~ # /usr/local/etc/rc.d/postgresql status
pg_ctl: server is running (PID: 15205)
/usr/local/bin/postgres "-D" "/usr/local/pgsql/data"

… we will now start rest of our Bareos stack services.

First the PHP php-fpm daemon.

root@replica:~ # /usr/local/etc/rc.d/php-fpm start
Performing sanity check on php-fpm configuration:
[27-Apr-2018 02:57:09] NOTICE: configuration file /usr/local/etc/php-fpm.conf test is successful

Starting php_fpm.

The Nginx webserver.

root@replica:~ # /usr/local/etc/rc.d/nginx start
Performing sanity check on nginx configuration:
nginx: the configuration file /usr/local/etc/nginx/nginx.conf syntax is ok
nginx: configuration file /usr/local/etc/nginx/nginx.conf test is successful
Starting nginx.

Bareos Storage Daemon.

root@replica:~ # /usr/local/etc/rc.d/bareos-sd start
Starting bareos_sd.

Bareos File Daemon also known as Bareos client.

root@replica:~ # /usr/local/etc/rc.d/bareos-fd start
Starting bareos_fd.

… and last but not least, the most important daemon of this guide, the Bareos Director.

root@replica:~ # /usr/local/etc/rc.d/bareos-dir start
Starting bareos_dir.

We may now see on which ports our daemons are listening.

root@replica:~ # sockstat -l4
USER     COMMAND    PID   FD PROTO  LOCAL ADDRESS         FOREIGN ADDRESS      
bareos   bareos-dir 89823 4  tcp4   *:9101                *:*
root     bareos-fd  73066 3  tcp4   *:9102                *:*
www      nginx      33857 6  tcp4   *:9100                *:*
www      nginx      28675 6  tcp4   *:9100                *:*
www      nginx      20960 6  tcp4   *:9100                *:*
www      nginx      15881 6  tcp4   *:9100                *:*
root     nginx      14388 6  tcp4   *:9100                *:*
www      php-fpm    84047 0  tcp4   127.0.0.1:9000        *:*
www      php-fpm    82285 0  tcp4   127.0.0.1:9000        *:*
www      php-fpm    80688 0  tcp4   127.0.0.1:9000        *:*
www      php-fpm    74735 0  tcp4   127.0.0.1:9000        *:*
root     php-fpm    70518 8  tcp4   127.0.0.1:9000        *:*
bareos   bareos-sd  5151  3  tcp4   *:9103                *:*
pgsql    postgres   20009 4  tcp4   127.0.0.1:5432        *:*
root     sshd       49253 4  tcp4   *:22                  *:*

In case you wondered in what order these services start, below is the answer from the rc(8) subsystem.

root@replica:~ # rcorder /etc/rc.d/* /usr/local/etc/rc.d/* | grep -E '(bareos|php-fpm|nginx|postgresql)'
/usr/local/etc/rc.d/postgresql
/usr/local/etc/rc.d/php-fpm
/usr/local/etc/rc.d/nginx
/usr/local/etc/rc.d/bareos-sd
/usr/local/etc/rc.d/bareos-fd
/usr/local/etc/rc.d/bareos-dir

We can now access http://replica.backup.org:9100 in our browser.

bareos-webui-01

It's indeed alive; we can now log in with the admin user and the ADMIN-PASSWORD password.

bareos-webui-02-dashboard

Once we log in we see an empty Bareos dashboard.

Jobs

Now, to make life easier I have prepared two scripts for adding clients to the Bareos server.

The BRONZE-job.sh and BRONZE-sched.sh scripts generate the Bareos files for new jobs and schedules. We will put them into the /root/bin directory for convenience.

root@replica:~ # mkdir /root/bin

Both scripts are available below:

After downloading them please rename them accordingly (WordPress limitation).

root@replica:~ # mv BRONZE-sched.sh.key BRONZE-sched.sh
root@replica:~ # mv BRONZE-job.sh.key   BRONZE-job.sh

Let's make them executable.

root@replica:~ # chmod +x /root/bin/BRONZE-sched.sh
root@replica:~ # chmod +x /root/bin/BRONZE-job.sh

Below is the ‘help’ message for each of them.

root@replica:~ # /root/bin/BRONZE-sched.sh 
usage: BRONZE-sched.sh GROUP TIME

example:
  BRONZE-sched.sh 01 21:00
root@replica:~ # /root/bin/BRONZE-job.sh
usage: BRONZE-job.sh GROUP TIME CLIENT TYPE

  GROUP option: 01 | 02 | 03
   TIME option: 00:00 - 23:59
 CLIENT option: FQDN
   TYPE option: UNIX | Windows

example:
  BRONZE-job.sh 01 21:00 CLIENT.domain.com UNIX

Client

For the first client we will use the replica.backup.org client – the server itself.

First use the BRONZE-sched.sh script to create a new schedule configuration. The script will echo the names of the files it created.

root@replica:~ # /root/bin/BRONZE-sched.sh 01 21:00
/usr/local/etc/bareos/bareos-dir.d/schedule/BRONZE-DAILY-01-2100-SCHED.conf
/usr/local/etc/bareos/bareos-dir.d/jobdefs/BRONZE-DAILY-01-2100-UNIX.conf
/usr/local/etc/bareos/bareos-dir.d/jobdefs/BRONZE-DAILY-01-2100-Windows.conf
/usr/local/etc/bareos/bareos-dir.d/schedule/BRONZE-MONTHLY-01-2100-SCHED.conf
/usr/local/etc/bareos/bareos-dir.d/jobdefs/BRONZE-MONTHLY-01-2100-UNIX.conf
/usr/local/etc/bareos/bareos-dir.d/jobdefs/BRONZE-MONTHLY-01-2100-Windows.conf

We will not use Windows backups for that client in that schedule so we can remove them.

root@replica:~ # rm -f \
  /usr/local/etc/bareos/bareos-dir.d/jobdefs/BRONZE-DAILY-01-2100-Windows.conf \
  /usr/local/etc/bareos/bareos-dir.d/jobdefs/BRONZE-MONTHLY-01-2100-Windows.conf

Then use the BRONZE-job.sh script to add a client and its type to the schedule created earlier. The names of the created files will also be echoed to stdout.

root@replica:~ # /root/bin/BRONZE-job.sh 01 21:00 replica.backup.org UNIX
INFO: client DNS check.
INFO: DNS 'A' RECORD: Host replica.backup.org not found: 3(NXDOMAIN)
INFO: DNS 'PTR' RECORD: Host 3\(NXDOMAIN\) not found: 3(NXDOMAIN)
/usr/local/etc/bareos/bareos-dir.d/job/BRONZE-DAILY-01-2100-replica.backup.org.conf
/usr/local/etc/bareos/bareos-dir.d/job/BRONZE-MONTHLY-01-2100-replica.backup.org.conf

Now we need to reload the Bareos server configuration.

root@replica:~ # echo reload | bconsole
Connecting to Director localhost:9101
1000 OK: replica.backup.org Version: 16.2.7 (09 October 2017)
Enter a period to cancel a command.
reload
reloaded
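
To double check that the Director accepted the newly generated resources we may also query it directly, for example with the status command, which among other things lists the scheduled jobs:

root@replica:~ # echo "status director" | bconsole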

Let's see how it looks in the browser. We will run that job, then cancel it, and then run it again.

bareos-webui-03-clients

Client replica.backup.org is configured.

Let's go to the Jobs tab to start its backup Job.

bareos-webui-04-jobs

Message that backup Job has started.

bareos-webui-05

We can see it in the running state on the Jobs tab.

bareos-webui-06

… and on the Dashboard.

bareos-webui-07

We can also display its messages by clicking on its number.

bareos-webui-08

The Jobs tab after cancelling the first Job and starting it again until completion.

bareos-webui-09

… and the Dashboard after these activities.

bareos-webui-10-dashboard

Restore

Let's restore some data; in Bareos it's a breeze, as restores are done directly in the browser on the Restore tab.

bareos-webui-11-restore

The Restore Job has started.

bareos-webui-12

The Dashboard after restoration.

bareos-webui-13-dashboard

… and Volumes with our precious data.

bareos-webui-14-volumes

Contents of a Volume.

bareos-webui-15-volumes-backups

Status of our Bareos Director.

bareos-webui-16

… and Director Messages, an equivalent of query actlog from IBM TSM or, as they call it recently, IBM Spectrum Protect.

bareos-webui-17-messages

… and Bareos Console (bconsole) directly in the browser. Masterpiece!

bareos-webui-18-console

Confirmation of the restored file.

root@replica:~ # ls -l /tmp/bareos-restores/COPYRIGHT 
-r--r--r--  1 root  wheel  6199 Jul 21  2017 /tmp/bareos-restores/COPYRIGHT

root@replica:~ # sha256 /tmp/bareos-restores/COPYRIGHT /COPYRIGHT | column -t
SHA256  (/tmp/bareos-restores/COPYRIGHT)  =  79b7aaafa1bc42a1ff03f1f78a667edb9a203dbcadec06aabc875e25a83d23f0
SHA256  (/COPYRIGHT)                      =  79b7aaafa1bc42a1ff03f1f78a667edb9a203dbcadec06aabc875e25a83d23f0

Remote Replica

We have Volumes with backups in the /bareos directory; we will now configure rsync(1) to replicate them to the /bareos-dr directory, served by the NFS server in the other location.

root@replica:~ # pkg install rsync

The rsync(1) command will look like this.

/usr/local/bin/rsync -r -u -l -p -t -S --force --no-whole-file --numeric-ids --delete-after /bareos/ /bareos-dr/
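
Before scheduling it, it is worth doing a test run with the additional -n (--dry-run) flag, which makes rsync(1) only report what it would transfer without writing anything:

root@replica:~ # /usr/local/bin/rsync -n -r -u -l -p -t -S --force --no-whole-file --numeric-ids --delete-after /bareos/ /bareos-dr/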

We will now put the replication command into root's crontab(1).

root@replica:~ # crontab -e

root@replica:~ # crontab -l
0 7 * * * /usr/local/bin/rsync -r -u -l -p -t -S --force --no-whole-file --numeric-ids --delete-after /bareos/ /bareos-dr/

As all backups finish before 7:00, the end of the backup window, we start the replication at that time.

Summary

So we have a configured Bareos Backup Server on the FreeBSD operating system, ready to make backups and restores. It can be used as an appliance on any virtualization platform, or on a physical server with local storage instead of NFS shares.

UPDATE 1 – Die Hard Tribute in 9.2-RC3 Loader

The FreeBSD developers even made a tribute to the Die Hard movie and actually implemented the Nakatomi Socrates screen in the FreeBSD 9.2-RC3 loader, as shown in the images below. Unfortunately it was removed in the later FreeBSD 9.2-RC4 and the official FreeBSD 9.2-RELEASE versions.

freebsd-9.2-nakatomi-socrates-01

freebsd-9.2-nakatomi-socrates-02

UPDATE 2

The Bareos Backup Server on FreeBSD article was featured in the BSD Now 254 – Bare the OS episode.

Thanks for the mention!

UPDATE 3 – Additional Permissions

Thanks to the user Math, who identified the problem, I added the paragraph below in the proper place to make the HOWTO complete. Without it many Bareos daemons would not start, failing with a permissions error.

Here is the added paragraph.

We also need to change permissions for the /var/run and /var/db directories for Bareos.

root@replica:~ # chown -R bareos:bareos /var/db/bareos
root@replica:~ # chown -R bareos:bareos /var/run/bareos


EOF