Tag Archives: tsm

IBM TSM (Spectrum Protect) on Veritas Cluster Server

Until today I mostly shared articles about free and open systems. Now it's time to share some so-called enterprise experience 🙂 Not so long ago I set up an IBM TSM instance as a highly available service on Symantec Veritas Cluster Server.

ibm-tsm-logo.png

If you prefer to use an open and free backup solution then check the Bareos Backup Server on FreeBSD article.

The IBM TSM (Tivoli Storage Manager) has been rebranded by IBM into IBM Spectrum Protect, and around the same time Symantec moved Veritas Cluster Server into InfoScale Availability while spinning off a separate/dedicated Veritas company for this purpose.

The instructions I want to share today should apply just as well to the latest versions of Veritas Cluster Server and its later InfoScale Availability incarnations. The introduction of the IBM Spectrum Protect 8.1 family was mostly about rebranding/cleaning up the whole Spectrum Protect/TSM modules and additions so that they all carry a common 8.1 label. These instructions were made for the IBM TSM (Spectrum Protect) 7.1.6 version, so they should still be very similar for current versions.

This highly available IBM TSM instance is part of a larger Backup Consolidation project which uses two physical servers to serve both this IBM TSM service and a Dell/EMC Networker backup server. When everything is OK, one of the nodes is dedicated to IBM TSM and the other one is used by Dell/EMC Networker, so all physical resources are well saturated and we do not ‘waste’ a whole node that would sit idle 99% of the time waiting for the first node to crash. Of course if the first node misbehaves or has a hardware failure, then both IBM TSM and Dell/EMC Networker run nicely on a single node. It is also very convenient for various maintenance tasks to be able to switch all services to the other node and work in peace on the first one, but I do not have to tell you that. The third and last service, shared between these two, is the Oracle RMAN Catalog holding metadata for the Oracle databases – also for backup/restore purposes.

I will not write here instructions to install the operating system (we use amd64 RHEL 6.x here) or to set up the Veritas Cluster Server, as I installed it earlier and it's quite simple to set up. These instructions focus on creating the IBM TSM highly available service along with using/allocating the resources from the IBM Storwize V5030 storage array, where 400 GB SSD disks are dedicated for the IBM TSM DB2 database instance and 1.8 TB 10K SAS disks are dedicated for DRAID groups that will be serving space for IBM TSM storage pools, implemented with the latest IBM TSM container pools with deduplication and compression enabled. The head of the IBM Storwize V5030 storage array is shown below.

ibm-tsm-v5030-photo.jpg

Each node is an IBM System x3650 M4 server with two dual-port 8Gb FC cards and one dual-port 10GE card … along with built-in 1GE cards for the Veritas Cluster Server heartbeats. Each has 192 GB RAM and dual 6-core CPUs @ 3.5 GHz, which translates to 12 physical cores or 24 HTT threads per node. The three internal SSD drives are used for the system only, in a RAID1 + SPARE configuration. All clustered resources come from the IBM Storwize V5030 FC/SAN storage array. The operating system installed on these nodes is amd64 RHEL 6.x and the Veritas Cluster Server is at the 6.2.x version. The IBM System x3650 M4 server is shown below.

ibm-tsm-x3650-m4.jpg

All of the settings/tuning/decisions were made based on the IBM TSM documentation and the great IBM Spectrum Protect Blueprints resources from the valuable IBM developerWorks wiki.

Storage Array Setup

First we need to create MDISKs. We used DRAID with double parity protection + spare for each MDISK, with 17 SAS 1.8 TB 10K disks each. That gives 14 disks for data, 2 for parity and 1 spare, all of which provide I/O thanks to the DRAID setup. We have three such MDISKs of ~21.7 TB each, for a total of 65.1 TB for IBM TSM containers. Of course all these 3 ‘pool’ MDISKs are in one Storage Group. The LUNs for the IBM TSM DB2 database come from 5 SSD 400 GB disks set up in a DRAID array with 1 parity and 1 spare disk. This gives 3 disks for data, 1 for parity and 1 for spare space, which works out to about 1.1 TB for the IBM TSM DB2 database.
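
For reference, distributed arrays like these can be created straight from the Spectrum Virtualize CLI. Below is a rough sketch – the host name, user, drive-class IDs and pool names are placeholders, and the exact flags may differ between firmware levels, so verify against the Storwize documentation:

# list the drive classes to find the IDs of the 1.8 TB 10K SAS and 400 GB SSD tiers
ssh superuser@v5030 lsdriveclass

# DRAID6 array from 17 x 1.8 TB 10K SAS drives (14 data + 2 parity + 1 distributed spare)
ssh superuser@v5030 mkdistributedarray -level raid6 -driveclass 1 -drivecount 17 -rebuildareas 1 TSM_POOL

# DRAID5 array from 5 x 400 GB SSD drives for the DB2 database LUNs (3 data + 1 parity + 1 spare)
ssh superuser@v5030 mkdistributedarray -level raid5 -driveclass 2 -drivecount 5 -rebuildareas 1 TSM_DB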

Here are LUNs created from these MDISKs.

ibm-tsm-v5030.png

I needed to remove some names of course 🙂

LUNs Initialization

Veritas Cluster Server needs to have the storage prepared as disk groups, which are similar in concept to (but more powerful than) LVM volume groups. Below are the instructions to first detect and then initialize these LUNs from the IBM Storwize V5030 storage array. I marked them in blue for more clarity.

[root@300 ~]# haconf -makerw
[root@300 ~]# vxdisk -o alldgs list
DEVICE                TYPE            DISK         GROUP        STATUS
disk_0                auto:LVM        -            -            online invalid
storwizev70000_00000a auto:cdsdisk    -            (dg_fencing) online
storwizev70000_00000b auto:cdsdisk    stgFC_00B    NSR_dg_nsr   online
storwizev70000_00000c auto:cdsdisk    stgFC_00C    NSR_dg_nsr   online
storwizev70000_00000d auto:cdsdisk    stgFC_00D    NSR_dg_nsr   online
storwizev70000_00000e auto:cdsdisk    stgFC_00E    NSR_dg_nsr   online
storwizev70000_00000f auto:cdsdisk    -            (RMAN_dg)    online
storwizev70000_00001a auto:none       -            -            online invalid
storwizev70000_00001b auto:none       -            -            online invalid
storwizev70000_00001c auto:none       -            -            online invalid
storwizev70000_00001d auto:none       -            -            online invalid
storwizev70000_00001e auto:none       -            -            online invalid
storwizev70000_00001f auto:none       -            -            online invalid
storwizev70000_000008 auto:cdsdisk    -            (dg_fencing) online
storwizev70000_000009 auto:cdsdisk    -            (dg_fencing) online
storwizev70000_000010 auto:cdsdisk    -            (RMAN_dg)    online
storwizev70000_000011 auto:cdsdisk    -            (RMAN_dg)    online
storwizev70000_000012 auto:none       -            -            online invalid
storwizev70000_000013 auto:none       -            -            online invalid
storwizev70000_000014 auto:none       -            -            online invalid
storwizev70000_000015 auto:none       -            -            online invalid
storwizev70000_000016 auto:none       -            -            online invalid
storwizev70000_000017 auto:none       -            -            online invalid
storwizev70000_000018 auto:none       -            -            online invalid
storwizev70000_000019 auto:none       -            -            online invalid
storwizev70000_000020 auto:none       -            -            online invalid
[root@300 ~]# vxdisksetup -i storwizev70000_00001a
[root@300 ~]# vxdisksetup -i storwizev70000_00001b
[root@300 ~]# vxdisksetup -i storwizev70000_00001c
[root@300 ~]# vxdisksetup -i storwizev70000_00001d
[root@300 ~]# vxdisksetup -i storwizev70000_00001e
[root@300 ~]# vxdisksetup -i storwizev70000_00001f
[root@300 ~]# vxdisksetup -i storwizev70000_000012
[root@300 ~]# vxdisksetup -i storwizev70000_000013
[root@300 ~]# vxdisksetup -i storwizev70000_000014
[root@300 ~]# vxdisksetup -i storwizev70000_000015
[root@300 ~]# vxdisksetup -i storwizev70000_000016
[root@300 ~]# vxdisksetup -i storwizev70000_000017
[root@300 ~]# vxdisksetup -i storwizev70000_000018
[root@300 ~]# vxdisksetup -i storwizev70000_000019
[root@300 ~]# vxdisksetup -i storwizev70000_000020
[root@300 ~]# vxdisk -o alldgs list
DEVICE                TYPE            DISK         GROUP        STATUS
disk_0                auto:LVM        -            -            online invalid
storwizev70000_00000a auto:cdsdisk    -            (dg_fencing) online
storwizev70000_00000b auto:cdsdisk    stgFC_00B    NSR_dg_nsr   online
storwizev70000_00000c auto:cdsdisk    stgFC_00C    NSR_dg_nsr   online
storwizev70000_00000d auto:cdsdisk    stgFC_00D    NSR_dg_nsr   online
storwizev70000_00000e auto:cdsdisk    stgFC_00E    NSR_dg_nsr   online
storwizev70000_00000f auto:cdsdisk    -            (RMAN_dg)    online
storwizev70000_00001a auto:cdsdisk    -            -            online
storwizev70000_00001b auto:cdsdisk    -            -            online
storwizev70000_00001c auto:cdsdisk    -            -            online
storwizev70000_00001d auto:cdsdisk    -            -            online
storwizev70000_00001e auto:cdsdisk    -            -            online
storwizev70000_00001f auto:cdsdisk    -            -            online
storwizev70000_000008 auto:cdsdisk    -            (dg_fencing) online
storwizev70000_000009 auto:cdsdisk    -            (dg_fencing) online
storwizev70000_000010 auto:cdsdisk    -            (RMAN_dg)    online
storwizev70000_000011 auto:cdsdisk    -            (RMAN_dg)    online
storwizev70000_000012 auto:cdsdisk    -            -            online
storwizev70000_000013 auto:cdsdisk    -            -            online
storwizev70000_000014 auto:cdsdisk    -            -            online
storwizev70000_000015 auto:cdsdisk    -            -            online
storwizev70000_000016 auto:cdsdisk    -            -            online
storwizev70000_000017 auto:cdsdisk    -            -            online
storwizev70000_000018 auto:cdsdisk    -            -            online
storwizev70000_000019 auto:cdsdisk    -            -            online
storwizev70000_000020 auto:cdsdisk    -            -            online
[root@300 ~]# vxdg init TSM0_dg \
                stgFC_020=storwizev70000_000020 \
                stgFC_012=storwizev70000_000012 \
                stgFC_016=storwizev70000_000016 \
                stgFC_013=storwizev70000_000013 \
                stgFC_014=storwizev70000_000014 \
                stgFC_015=storwizev70000_000015 \
                stgFC_017=storwizev70000_000017 \
                stgFC_018=storwizev70000_000018 \
                stgFC_019=storwizev70000_000019 \
                stgFC_01A=storwizev70000_00001a \
                stgFC_01B=storwizev70000_00001b \
                stgFC_01C=storwizev70000_00001c \
                stgFC_01D=storwizev70000_00001d \
                stgFC_01E=storwizev70000_00001e \
                stgFC_01F=storwizev70000_00001f
[root@300 ~]# vxdisk -o alldgs list
DEVICE                TYPE            DISK         GROUP        STATUS
disk_0                auto:LVM        -            -            online invalid
storwizev70000_00000a auto:cdsdisk    -            (dg_fencing) online
storwizev70000_00000b auto:cdsdisk    stgFC_00B    NSR_dg_nsr   online
storwizev70000_00000c auto:cdsdisk    stgFC_00C    NSR_dg_nsr   online
storwizev70000_00000d auto:cdsdisk    stgFC_00D    NSR_dg_nsr   online
storwizev70000_00000e auto:cdsdisk    stgFC_00E    NSR_dg_nsr   online
storwizev70000_00000f auto:cdsdisk    -            (RMAN_dg)    online
storwizev70000_00001a auto:cdsdisk    stgFC_01A    TSM0_dg      online
storwizev70000_00001b auto:cdsdisk    stgFC_01B    TSM0_dg      online
storwizev70000_00001c auto:cdsdisk    stgFC_01C    TSM0_dg      online
storwizev70000_00001d auto:cdsdisk    stgFC_01D    TSM0_dg      online
storwizev70000_00001e auto:cdsdisk    stgFC_01E    TSM0_dg      online
storwizev70000_00001f auto:cdsdisk    stgFC_01F    TSM0_dg      online
storwizev70000_000008 auto:cdsdisk    -            (dg_fencing) online
storwizev70000_000009 auto:cdsdisk    -            (dg_fencing) online
storwizev70000_000010 auto:cdsdisk    -            (RMAN_dg)    online
storwizev70000_000011 auto:cdsdisk    -            (RMAN_dg)    online
storwizev70000_000012 auto:cdsdisk    stgFC_012    TSM0_dg      online
storwizev70000_000013 auto:cdsdisk    stgFC_013    TSM0_dg      online
storwizev70000_000014 auto:cdsdisk    stgFC_014    TSM0_dg      online
storwizev70000_000015 auto:cdsdisk    stgFC_015    TSM0_dg      online
storwizev70000_000016 auto:cdsdisk    stgFC_016    TSM0_dg      online
storwizev70000_000017 auto:cdsdisk    stgFC_017    TSM0_dg      online
storwizev70000_000018 auto:cdsdisk    stgFC_018    TSM0_dg      online
storwizev70000_000019 auto:cdsdisk    stgFC_019    TSM0_dg      online
storwizev70000_000020 auto:cdsdisk    stgFC_020    TSM0_dg      online
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_instance     maxsize=32G   stgFC_020
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_active_log   maxsize=128G  stgFC_012
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_archive_log  maxsize=384G  stgFC_016
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_db_01        maxsize=300G  stgFC_013
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_db_02        maxsize=300G  stgFC_014
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_db_03        maxsize=300G  stgFC_015
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_db_backup_01 maxsize=900G  stgFC_017
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_db_backup_02 maxsize=900G  stgFC_018
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_db_backup_03 maxsize=900G  stgFC_019
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_pool0_01     maxsize=6700G stgFC_01A
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_pool0_02     maxsize=6700G stgFC_01B
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_pool0_03     maxsize=6700G stgFC_01C
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_pool0_04     maxsize=6700G stgFC_01D
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_pool0_05     maxsize=6700G stgFC_01E
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_pool0_06     maxsize=6700G stgFC_01F
[root@300 ~]# vxprint -u h | grep ^sd | column -t
sd  stgFC_00B-01  NSR_vol_index-02          ENABLED  399.95g  0.00  -  -  -
sd  stgFC_00C-01  NSR_vol_media-02          ENABLED  9.96g    0.00  -  -  -
sd  stgFC_00D-01  NSR_vol_nsr-02            ENABLED  79.96g   0.00  -  -  -
sd  stgFC_00E-01  NSR_vol_res-02            ENABLED  9.96g    0.00  -  -  -
sd  stgFC_012-01  TSM0_vol_active_log-01    ENABLED  127.96g  0.00  -  -  -
sd  stgFC_016-01  TSM0_vol_archive_log-01   ENABLED  383.95g  0.00  -  -  -
sd  stgFC_017-01  TSM0_vol_db_backup_01-01  ENABLED  899.93g  0.00  -  -  -
sd  stgFC_018-01  TSM0_vol_db_backup_02-01  ENABLED  899.93g  0.00  -  -  -
sd  stgFC_019-01  TSM0_vol_db_backup_03-01  ENABLED  899.93g  0.00  -  -  -
sd  stgFC_013-01  TSM0_vol_db_01-01         ENABLED  299.95g  0.00  -  -  -
sd  stgFC_014-01  TSM0_vol_db_02-01         ENABLED  299.95g  0.00  -  -  -
sd  stgFC_015-01  TSM0_vol_db_03-01         ENABLED  299.95g  0.00  -  -  -
sd  stgFC_020-01  TSM0_vol_instance-01      ENABLED  31.96g   0.00  -  -  -
sd  stgFC_01A-01  TSM0_vol_pool0_01-01      ENABLED  6.54t    0.00  -  -  -
sd  stgFC_01B-01  TSM0_vol_pool0_02-01      ENABLED  6.54t    0.00  -  -  -
sd  stgFC_01C-01  TSM0_vol_pool0_03-01      ENABLED  6.54t    0.00  -  -  -
sd  stgFC_01D-01  TSM0_vol_pool0_04-01      ENABLED  6.54t    0.00  -  -  -
sd  stgFC_01E-01  TSM0_vol_pool0_05-01      ENABLED  6.54t    0.00  -  -  -
sd  stgFC_01F-01  TSM0_vol_pool0_06-01      ENABLED  6.54t    0.00  -  -  -
[root@300 ~]# vxprint -u h -g TSM0_dg | column -t
TY  NAME                      ASSOC                     KSTATE   LENGTH   PLOFFS  STATE   TUTIL0  PUTIL0
dg  TSM0_dg                   TSM0_dg                   -        -        -       -       -       -
dm  stgFC_01A                 storwizev70000_00001a     -        6.54t    -       -       -       -
dm  stgFC_01B                 storwizev70000_00001b     -        6.54t    -       -       -       -
dm  stgFC_01C                 storwizev70000_00001c     -        6.54t    -       -       -       -
dm  stgFC_01D                 storwizev70000_00001d     -        6.54t    -       -       -       -
dm  stgFC_01E                 storwizev70000_00001e     -        6.54t    -       -       -       -
dm  stgFC_01F                 storwizev70000_00001f     -        6.54t    -       -       -       -
dm  stgFC_012                 storwizev70000_000012     -        127.96g  -       -       -       -
dm  stgFC_013                 storwizev70000_000013     -        299.95g  -       -       -       -
dm  stgFC_014                 storwizev70000_000014     -        299.95g  -       -       -       -
dm  stgFC_015                 storwizev70000_000015     -        299.95g  -       -       -       -
dm  stgFC_016                 storwizev70000_000016     -        383.95g  -       -       -       -
dm  stgFC_017                 storwizev70000_000017     -        899.93g  -       -       -       -
dm  stgFC_018                 storwizev70000_000018     -        899.93g  -       -       -       -
dm  stgFC_019                 storwizev70000_000019     -        899.93g  -       -       -       -
dm  stgFC_020                 storwizev70000_000020     -        31.96g   -       -       -       -

v   TSM0_vol_active_log       fsgen                     ENABLED  127.96g  -       ACTIVE  -       -
pl  TSM0_vol_active_log-01    TSM0_vol_active_log       ENABLED  127.96g  -       ACTIVE  -       -
sd  stgFC_012-01              TSM0_vol_active_log-01    ENABLED  127.96g  0.00    -       -       -

v   TSM0_vol_archive_log      fsgen                     ENABLED  383.95g  -       ACTIVE  -       -
pl  TSM0_vol_archive_log-01   TSM0_vol_archive_log      ENABLED  383.95g  -       ACTIVE  -       -
sd  stgFC_016-01              TSM0_vol_archive_log-01   ENABLED  383.95g  0.00    -       -       -

v   TSM0_vol_db_backup_01     fsgen                     ENABLED  899.93g  -       ACTIVE  -       -
pl  TSM0_vol_db_backup_01-01  TSM0_vol_db_backup_01     ENABLED  899.93g  -       ACTIVE  -       -
sd  stgFC_017-01              TSM0_vol_db_backup_01-01  ENABLED  899.93g  0.00    -       -       -

v   TSM0_vol_db_backup_02     fsgen                     ENABLED  899.93g  -       ACTIVE  -       -
pl  TSM0_vol_db_backup_02-01  TSM0_vol_db_backup_02     ENABLED  899.93g  -       ACTIVE  -       -
sd  stgFC_018-01              TSM0_vol_db_backup_02-01  ENABLED  899.93g  0.00    -       -       -

v   TSM0_vol_db_backup_03     fsgen                     ENABLED  899.93g  -       ACTIVE  -       -
pl  TSM0_vol_db_backup_03-01  TSM0_vol_db_backup_03     ENABLED  899.93g  -       ACTIVE  -       -
sd  stgFC_019-01              TSM0_vol_db_backup_03-01  ENABLED  899.93g  0.00    -       -       -

v   TSM0_vol_db_01            fsgen                     ENABLED  299.95g  -       ACTIVE  -       -
pl  TSM0_vol_db_01-01         TSM0_vol_db_01            ENABLED  299.95g  -       ACTIVE  -       -
sd  stgFC_013-01              TSM0_vol_db_01-01         ENABLED  299.95g  0.00    -       -       -

v   TSM0_vol_db_02            fsgen                     ENABLED  299.95g  -       ACTIVE  -       -
pl  TSM0_vol_db_02-01         TSM0_vol_db_02            ENABLED  299.95g  -       ACTIVE  -       -
sd  stgFC_014-01              TSM0_vol_db_02-01         ENABLED  299.95g  0.00    -       -       -

v   TSM0_vol_db_03            fsgen                     ENABLED  299.95g  -       ACTIVE  -       -
pl  TSM0_vol_db_03-01         TSM0_vol_db_03            ENABLED  299.95g  -       ACTIVE  -       -
sd  stgFC_015-01              TSM0_vol_db_03-01         ENABLED  299.95g  0.00    -       -       -

v   TSM0_vol_instance         fsgen                     ENABLED  31.96g   -       ACTIVE  -       -
pl  TSM0_vol_instance-01      TSM0_vol_instance         ENABLED  31.96g   -       ACTIVE  -       -
sd  stgFC_020-01              TSM0_vol_instance-01      ENABLED  31.96g   0.00    -       -       -

v   TSM0_vol_pool0_01         fsgen                     ENABLED  6.54t    -       ACTIVE  -       -
pl  TSM0_vol_pool0_01-01      TSM0_vol_pool0_01         ENABLED  6.54t    -       ACTIVE  -       -
sd  stgFC_01A-01              TSM0_vol_pool0_01-01      ENABLED  6.54t    0.00    -       -       -

v   TSM0_vol_pool0_02         fsgen                     ENABLED  6.54t    -       ACTIVE  -       -
pl  TSM0_vol_pool0_02-01      TSM0_vol_pool0_02         ENABLED  6.54t    -       ACTIVE  -       -
sd  stgFC_01B-01              TSM0_vol_pool0_02-01      ENABLED  6.54t    0.00    -       -       -

v   TSM0_vol_pool0_03         fsgen                     ENABLED  6.54t    -       ACTIVE  -       -
pl  TSM0_vol_pool0_03-01      TSM0_vol_pool0_03         ENABLED  6.54t    -       ACTIVE  -       -
sd  stgFC_01C-01              TSM0_vol_pool0_03-01      ENABLED  6.54t    0.00    -       -       -

v   TSM0_vol_pool0_04         fsgen                     ENABLED  6.54t    -       ACTIVE  -       -
pl  TSM0_vol_pool0_04-01      TSM0_vol_pool0_04         ENABLED  6.54t    -       ACTIVE  -       -
sd  stgFC_01D-01              TSM0_vol_pool0_04-01      ENABLED  6.54t    0.00    -       -       -

v   TSM0_vol_pool0_05         fsgen                     ENABLED  6.54t    -       ACTIVE  -       -
pl  TSM0_vol_pool0_05-01      TSM0_vol_pool0_05         ENABLED  6.54t    -       ACTIVE  -       -
sd  stgFC_01E-01              TSM0_vol_pool0_05-01      ENABLED  6.54t    0.00    -       -       -

v   TSM0_vol_pool0_06         fsgen                     ENABLED  6.54t    -       ACTIVE  -       -
pl  TSM0_vol_pool0_06-01      TSM0_vol_pool0_06         ENABLED  6.54t    -       ACTIVE  -       -
sd  stgFC_01F-01              TSM0_vol_pool0_06-01      ENABLED  6.54t    0.00    -       -       -
[root@300 ~]# vxinfo -p -g TSM0_dg | column -t
vol   TSM0_vol_instance         fsgen   Started
plex  TSM0_vol_instance-01      ACTIVE
vol   TSM0_vol_active_log       fsgen   Started
plex  TSM0_vol_active_log-01    ACTIVE
vol   TSM0_vol_archive_log      fsgen   Started
plex  TSM0_vol_archive_log-01   ACTIVE
vol   TSM0_vol_db_01            fsgen   Started
plex  TSM0_vol_db_01-01         ACTIVE
vol   TSM0_vol_db_02            fsgen   Started
plex  TSM0_vol_db_02-01         ACTIVE
vol   TSM0_vol_db_03            fsgen   Started
plex  TSM0_vol_db_03-01         ACTIVE
vol   TSM0_vol_db_backup_01     fsgen   Started
plex  TSM0_vol_db_backup_01-01  ACTIVE
vol   TSM0_vol_db_backup_02     fsgen   Started
plex  TSM0_vol_db_backup_02-01  ACTIVE
vol   TSM0_vol_db_backup_03     fsgen   Started
plex  TSM0_vol_db_backup_03-01  ACTIVE
vol   TSM0_vol_pool0_01         fsgen   Started
plex  TSM0_vol_pool0_01-01      ACTIVE
vol   TSM0_vol_pool0_02         fsgen   Started
plex  TSM0_vol_pool0_02-01      ACTIVE
vol   TSM0_vol_pool0_03         fsgen   Started
plex  TSM0_vol_pool0_03-01      ACTIVE
vol   TSM0_vol_pool0_04         fsgen   Started
plex  TSM0_vol_pool0_04-01      ACTIVE
vol   TSM0_vol_pool0_05         fsgen   Started
plex  TSM0_vol_pool0_05-01      ACTIVE
vol   TSM0_vol_pool0_06         fsgen   Started
plex  TSM0_vol_pool0_06-01      ACTIVE
[root@300 ~]# find /dev/vx/dsk -name TSM0_\*
/dev/vx/dsk/TSM0_dg
/dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_06
/dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_05
/dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_04
/dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_03
/dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_02
/dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_01
/dev/vx/dsk/TSM0_dg/TSM0_vol_db_backup_03
/dev/vx/dsk/TSM0_dg/TSM0_vol_db_backup_02
/dev/vx/dsk/TSM0_dg/TSM0_vol_db_backup_01
/dev/vx/dsk/TSM0_dg/TSM0_vol_db_03
/dev/vx/dsk/TSM0_dg/TSM0_vol_db_02
/dev/vx/dsk/TSM0_dg/TSM0_vol_db_01
/dev/vx/dsk/TSM0_dg/TSM0_vol_archive_log
/dev/vx/dsk/TSM0_dg/TSM0_vol_active_log
/dev/vx/dsk/TSM0_dg/TSM0_vol_instance
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_pool0_06     &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_pool0_05     &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_pool0_04     &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_pool0_03     &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_pool0_02     &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_pool0_01     &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_db_backup_03 &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_db_backup_02 &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_db_backup_01 &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_db_03        &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_db_02        &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_db_01        &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_archive_log  &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_active_log   &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_instance     &

[root@300 ~]# haconf -dump -makero
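
Since the mkfs jobs were started in the background, it is worth waiting for them and sanity-checking one of the new filesystems before moving on. A short sketch (the fstyp path may differ depending on the VxFS packaging):

# wait for the backgrounded mkfs jobs in this shell to finish
wait

# print the superblock of one of the new vxfs filesystems (block size, largefiles flag)
/opt/VRTS/bin/fstyp -v /dev/vx/rdsk/TSM0_dg/TSM0_vol_pool0_01 | head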

Veritas Cluster Server Group

Now that we have the LUNs initialized into a disk group, we can create the cluster service.

[root@300 ~]# haconf -makerw
[root@300 ~]# hagrp -add TSM0_site
VCS NOTICE V-16-1-10136 Group added; populating SystemList and setting the Parallel attribute recommended before adding resources
[root@300 ~]# hagrp -modify TSM0_site SystemList 300 0 301 1
[root@300 ~]# hagrp -modify TSM0_site AutoStartList 300 301
[root@300 ~]# hagrp -modify TSM0_site Parallel 0
[root@300 ~]# hares -add    TSM0_nic_bond0 NIC TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_nic_bond0 Critical 1
[root@300 ~]# hares -modify TSM0_nic_bond0 PingOptimize 1
[root@300 ~]# hares -modify TSM0_nic_bond0 Device bond0
[root@300 ~]# hares -modify TSM0_nic_bond0 Enabled 1
[root@300 ~]# hares -probe  TSM0_nic_bond0 -sys 301
[root@300 ~]# hares -add    TSM0_ip_bond0 IP TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_ip_bond0 Critical 1
[root@300 ~]# hares -modify TSM0_ip_bond0 Device bond0
[root@300 ~]# hares -modify TSM0_ip_bond0 Address 10.20.30.44
[root@300 ~]# hares -modify TSM0_ip_bond0 NetMask 255.255.255.0
[root@300 ~]# hares -modify TSM0_ip_bond0 Enabled 1
[root@300 ~]# hares -link   TSM0_ip_bond0 TSM0_nic_bond0
[root@300 ~]# hares -add    TSM0_dg DiskGroup TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_dg Critical 1
[root@300 ~]# hares -modify TSM0_dg DiskGroup TSM0_dg
[root@300 ~]# hares -modify TSM0_dg Enabled 1
[root@300 ~]# hares -probe  TSM0_dg -sys 301
[root@300 ~]# mkdir /tsm0
[root@301 ~]# mkdir /tsm0

I did not want to type all of these commands over and over again, so I generated them as shown below.

[LOCAL] % cat > LIST << __EOF
stgFC_020    32  /tsm0                         TSM0_vol_instance      TSM0_mnt_instance
stgFC_012   128  /tsm0/active_log              TSM0_vol_active_log    TSM0_mnt_active_log
stgFC_016   384  /tsm0/archive_log             TSM0_vol_archive_log   TSM0_mnt_archive_log
stgFC_013   300  /tsm0/db/db_01                TSM0_vol_db_01         TSM0_mnt_db_01
stgFC_014   300  /tsm0/db/db_02                TSM0_vol_db_02         TSM0_mnt_db_02
stgFC_015   300  /tsm0/db/db_03                TSM0_vol_db_03         TSM0_mnt_db_03
stgFC_017   900  /tsm0/db_backup/db_backup_01  TSM0_vol_db_backup_01  TSM0_mnt_db_backup_01
stgFC_018   900  /tsm0/db_backup/db_backup_02  TSM0_vol_db_backup_02  TSM0_mnt_db_backup_02
stgFC_019   900  /tsm0/db_backup/db_backup_03  TSM0_vol_db_backup_03  TSM0_mnt_db_backup_03
stgFC_01A  6700  /tsm0/pool0/pool0_01          TSM0_vol_pool0_01      TSM0_mnt_pool0_01
stgFC_01B  6700  /tsm0/pool0/pool0_02          TSM0_vol_pool0_02      TSM0_mnt_pool0_02
stgFC_01C  6700  /tsm0/pool0/pool0_03          TSM0_vol_pool0_03      TSM0_mnt_pool0_03
stgFC_01D  6700  /tsm0/pool0/pool0_04          TSM0_vol_pool0_04      TSM0_mnt_pool0_04
stgFC_01E  6700  /tsm0/pool0/pool0_05          TSM0_vol_pool0_05      TSM0_mnt_pool0_05
stgFC_01F  6700  /tsm0/pool0/pool0_06          TSM0_vol_pool0_06      TSM0_mnt_pool0_06
__EOF
[LOCAL]# cat LIST \
  | while read STG SIZE MNTPOINT VOL MNTNAME
    do
      echo sleep 0.2; echo hares -add    ${MNTNAME} Mount TSM0_site
      echo sleep 0.2; echo hares -modify ${MNTNAME} Critical 1
      echo sleep 0.2; echo hares -modify ${MNTNAME} SnapUmount 0
      echo sleep 0.2; echo hares -modify ${MNTNAME} MountPoint ${MNTPOINT}
      echo sleep 0.2; echo hares -modify ${MNTNAME} BlockDevice /dev/vx/dsk/TSM0_dg/${VOL}
      echo sleep 0.2; echo hares -modify ${MNTNAME} FSType vxfs
      echo sleep 0.2; echo hares -modify ${MNTNAME} MountOpt largefiles
      echo sleep 0.2; echo hares -modify ${MNTNAME} FsckOpt %-y
      echo sleep 0.2; echo hares -modify ${MNTNAME} Enabled 1
      echo sleep 0.2; echo hares -probe  ${MNTNAME} -sys 301
      echo sleep 0.2; echo hares -link   ${MNTNAME} TSM0_dg
      echo
    done
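
The loop above only prints the commands – they were then pasted into the shell on the node as shown below. Alternatively (a sketch, assuming the generated output is saved to a hypothetical TSM0_mounts.cmds file on the node) they could be executed in one go:

[root@300 ~]# sh -x TSM0_mounts.cmds
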
[root@300 ~]# hares -add    TSM0_mnt_instance Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_instance Critical 1
[root@300 ~]# hares -modify TSM0_mnt_instance SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_instance MountPoint /tsm0
[root@300 ~]# hares -modify TSM0_mnt_instance BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_instance
[root@300 ~]# hares -modify TSM0_mnt_instance FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_instance MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_instance FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_instance Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_instance -sys 301
[root@300 ~]# hares -link   TSM0_mnt_instance TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_active_log Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_active_log Critical 1
[root@300 ~]# hares -modify TSM0_mnt_active_log SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_active_log MountPoint /tsm0/active_log
[root@300 ~]# hares -modify TSM0_mnt_active_log BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_active_log
[root@300 ~]# hares -modify TSM0_mnt_active_log FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_active_log MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_active_log FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_active_log Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_active_log -sys 301
[root@300 ~]# hares -link   TSM0_mnt_active_log TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_archive_log Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_archive_log Critical 1
[root@300 ~]# hares -modify TSM0_mnt_archive_log SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_archive_log MountPoint /tsm0/archive_log
[root@300 ~]# hares -modify TSM0_mnt_archive_log BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_archive_log
[root@300 ~]# hares -modify TSM0_mnt_archive_log FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_archive_log MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_archive_log FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_archive_log Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_archive_log -sys 301
[root@300 ~]# hares -link   TSM0_mnt_archive_log TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_db_01 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_db_01 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_db_01 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_db_01 MountPoint /tsm0/db/db_01
[root@300 ~]# hares -modify TSM0_mnt_db_01 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_db_01
[root@300 ~]# hares -modify TSM0_mnt_db_01 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_db_01 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_db_01 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_db_01 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_db_01 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_db_01 TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_db_02 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_db_02 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_db_02 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_db_02 MountPoint /tsm0/db/db_02
[root@300 ~]# hares -modify TSM0_mnt_db_02 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_db_02
[root@300 ~]# hares -modify TSM0_mnt_db_02 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_db_02 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_db_02 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_db_02 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_db_02 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_db_02 TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_db_03 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_db_03 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_db_03 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_db_03 MountPoint /tsm0/db/db_03
[root@300 ~]# hares -modify TSM0_mnt_db_03 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_db_03
[root@300 ~]# hares -modify TSM0_mnt_db_03 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_db_03 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_db_03 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_db_03 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_db_03 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_db_03 TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_db_backup_01 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_db_backup_01 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_db_backup_01 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_db_backup_01 MountPoint /tsm0/db_backup/db_backup_01
[root@300 ~]# hares -modify TSM0_mnt_db_backup_01 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_db_backup_01
[root@300 ~]# hares -modify TSM0_mnt_db_backup_01 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_db_backup_01 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_db_backup_01 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_db_backup_01 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_db_backup_01 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_db_backup_01 TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_db_backup_02 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_db_backup_02 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_db_backup_02 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_db_backup_02 MountPoint /tsm0/db_backup/db_backup_02
[root@300 ~]# hares -modify TSM0_mnt_db_backup_02 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_db_backup_02
[root@300 ~]# hares -modify TSM0_mnt_db_backup_02 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_db_backup_02 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_db_backup_02 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_db_backup_02 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_db_backup_02 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_db_backup_02 TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_db_backup_03 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_db_backup_03 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_db_backup_03 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_db_backup_03 MountPoint /tsm0/db_backup/db_backup_03
[root@300 ~]# hares -modify TSM0_mnt_db_backup_03 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_db_backup_03
[root@300 ~]# hares -modify TSM0_mnt_db_backup_03 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_db_backup_03 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_db_backup_03 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_db_backup_03 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_db_backup_03 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_db_backup_03 TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_pool0_01 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_pool0_01 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_pool0_01 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_pool0_01 MountPoint /tsm0/pool0/pool0_01
[root@300 ~]# hares -modify TSM0_mnt_pool0_01 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_01
[root@300 ~]# hares -modify TSM0_mnt_pool0_01 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_pool0_01 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_pool0_01 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_pool0_01 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_pool0_01 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_pool0_01 TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_pool0_02 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_pool0_02 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_pool0_02 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_pool0_02 MountPoint /tsm0/pool0/pool0_02
[root@300 ~]# hares -modify TSM0_mnt_pool0_02 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_02
[root@300 ~]# hares -modify TSM0_mnt_pool0_02 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_pool0_02 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_pool0_02 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_pool0_02 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_pool0_02 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_pool0_02 TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_pool0_03 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_pool0_03 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_pool0_03 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_pool0_03 MountPoint /tsm0/pool0/pool0_03
[root@300 ~]# hares -modify TSM0_mnt_pool0_03 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_03
[root@300 ~]# hares -modify TSM0_mnt_pool0_03 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_pool0_03 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_pool0_03 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_pool0_03 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_pool0_03 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_pool0_03 TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_pool0_04 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_pool0_04 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_pool0_04 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_pool0_04 MountPoint /tsm0/pool0/pool0_04
[root@300 ~]# hares -modify TSM0_mnt_pool0_04 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_04
[root@300 ~]# hares -modify TSM0_mnt_pool0_04 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_pool0_04 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_pool0_04 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_pool0_04 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_pool0_04 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_pool0_04 TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_pool0_05 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_pool0_05 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_pool0_05 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_pool0_05 MountPoint /tsm0/pool0/pool0_05
[root@300 ~]# hares -modify TSM0_mnt_pool0_05 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_05
[root@300 ~]# hares -modify TSM0_mnt_pool0_05 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_pool0_05 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_pool0_05 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_pool0_05 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_pool0_05 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_pool0_05 TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_pool0_06 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_pool0_06 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_pool0_06 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_pool0_06 MountPoint /tsm0/pool0/pool0_06
[root@300 ~]# hares -modify TSM0_mnt_pool0_06 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_06
[root@300 ~]# hares -modify TSM0_mnt_pool0_06 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_pool0_06 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_pool0_06 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_pool0_06 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_pool0_06 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_pool0_06 TSM0_dg
[root@300 ~]# hares -state | grep TSM0 | grep _mnt_ | \
                while read I; do hares -display $I 2>&1 | grep -v ArgListValues | grep 'largefiles'; done | column -t
TSM0_mnt_active_log    MountOpt  localclus  largefiles
TSM0_mnt_active_log    MountOpt  localclus  largefiles
TSM0_mnt_archive_log   MountOpt  localclus  largefiles
TSM0_mnt_archive_log   MountOpt  localclus  largefiles
TSM0_mnt_db_01         MountOpt  localclus  largefiles
TSM0_mnt_db_01         MountOpt  localclus  largefiles
TSM0_mnt_db_02         MountOpt  localclus  largefiles
TSM0_mnt_db_02         MountOpt  localclus  largefiles
TSM0_mnt_db_03         MountOpt  localclus  largefiles
TSM0_mnt_db_03         MountOpt  localclus  largefiles
TSM0_mnt_db_backup_01  MountOpt  localclus  largefiles
TSM0_mnt_db_backup_01  MountOpt  localclus  largefiles
TSM0_mnt_db_backup_02  MountOpt  localclus  largefiles
TSM0_mnt_db_backup_02  MountOpt  localclus  largefiles
TSM0_mnt_db_backup_03  MountOpt  localclus  largefiles
TSM0_mnt_db_backup_03  MountOpt  localclus  largefiles
TSM0_mnt_instance      MountOpt  localclus  largefiles
TSM0_mnt_instance      MountOpt  localclus  largefiles
TSM0_mnt_pool0_01      MountOpt  localclus  largefiles
TSM0_mnt_pool0_01      MountOpt  localclus  largefiles
TSM0_mnt_pool0_02      MountOpt  localclus  largefiles
TSM0_mnt_pool0_02      MountOpt  localclus  largefiles
TSM0_mnt_pool0_03      MountOpt  localclus  largefiles
TSM0_mnt_pool0_03      MountOpt  localclus  largefiles
TSM0_mnt_pool0_04      MountOpt  localclus  largefiles
TSM0_mnt_pool0_04      MountOpt  localclus  largefiles
TSM0_mnt_pool0_05      MountOpt  localclus  largefiles
TSM0_mnt_pool0_05      MountOpt  localclus  largefiles
TSM0_mnt_pool0_06      MountOpt  localclus  largefiles
TSM0_mnt_pool0_06      MountOpt  localclus  largefiles
[root@300 ~]# hares -add    TSM0_server Application TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_server StartProgram   "/etc/init.d/tsm0 start"
[root@300 ~]# hares -modify TSM0_server StopProgram    "/etc/init.d/tsm0 stop"
[root@300 ~]# hares -modify TSM0_server MonitorProgram "/etc/init.d/tsm0 status"
[root@300 ~]# hares -modify TSM0_server Enabled 1
[root@300 ~]# hares -probe  TSM0_server -sys 301
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_active_log
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_archive_log
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_db_01
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_db_02
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_db_03
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_db_backup_01
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_db_backup_02
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_db_backup_03
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_pool0_01
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_pool0_02
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_pool0_03
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_pool0_04
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_pool0_05
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_pool0_06
[root@300 ~]# hares -link   TSM0_server           TSM0_ip_bond0
[root@300 ~]# hares -link   TSM0_mnt_active_log   TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_archive_log  TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_db_01        TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_db_02        TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_db_03        TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_db_backup_01 TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_db_backup_02 TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_db_backup_03 TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_pool0_01     TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_pool0_02     TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_pool0_03     TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_pool0_04     TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_pool0_05     TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_pool0_06     TSM0_mnt_instance
[root@300 ~]# vxdg import TSM0_dg
[root@300 ~]# mount -t vxfs /dev/vx/dsk/TSM0_dg/TSM0_vol_instance /tsm0
[root@300 ~]# mkdir -p /tsm0/active_log
[root@300 ~]# mkdir -p /tsm0/archive_log
[root@300 ~]# mkdir -p /tsm0/db/db_01
[root@300 ~]# mkdir -p /tsm0/db/db_02
[root@300 ~]# mkdir -p /tsm0/db/db_03
[root@300 ~]# mkdir -p /tsm0/db_backup/db_backup_01
[root@300 ~]# mkdir -p /tsm0/db_backup/db_backup_02
[root@300 ~]# mkdir -p /tsm0/db_backup/db_backup_03
[root@300 ~]# mkdir -p /tsm0/pool0/pool0_01
[root@300 ~]# mkdir -p /tsm0/pool0/pool0_02
[root@300 ~]# mkdir -p /tsm0/pool0/pool0_03
[root@300 ~]# mkdir -p /tsm0/pool0/pool0_04
[root@300 ~]# mkdir -p /tsm0/pool0/pool0_05
[root@300 ~]# mkdir -p /tsm0/pool0/pool0_06
[root@300 ~]# find /tsm0
/tsm0
/tsm0/lost+found
/tsm0/active_log
/tsm0/archive_log
/tsm0/db
/tsm0/db/db_01
/tsm0/db/db_02
/tsm0/db/db_03
/tsm0/db_backup
/tsm0/db_backup/db_backup_01
/tsm0/db_backup/db_backup_02
/tsm0/db_backup/db_backup_03
/tsm0/pool0
/tsm0/pool0/pool0_01
/tsm0/pool0/pool0_02
/tsm0/pool0/pool0_03
/tsm0/pool0/pool0_04
/tsm0/pool0/pool0_05
/tsm0/pool0/pool0_06
[root@300 ~]# umount /tsm0
[root@300 ~]# vxdg deport TSM0_dg
[root@300 ~]# haconf -dump -makero
[root@300 ~]# grep TSM0_server /etc/VRTSvcs/conf/config/main.cf
        Application TSM0_server (
        TSM0_server requires TSM0_ip_bond0
        TSM0_server requires TSM0_mnt_active_log
        TSM0_server requires TSM0_mnt_archive_log
        TSM0_server requires TSM0_mnt_db_01
        TSM0_server requires TSM0_mnt_db_02
        TSM0_server requires TSM0_mnt_db_03
        TSM0_server requires TSM0_mnt_db_backup_01
        TSM0_server requires TSM0_mnt_db_backup_02
        TSM0_server requires TSM0_mnt_db_backup_03
        TSM0_server requires TSM0_mnt_instance
        TSM0_server requires TSM0_mnt_pool0_01
        TSM0_server requires TSM0_mnt_pool0_02
        TSM0_server requires TSM0_mnt_pool0_03
        TSM0_server requires TSM0_mnt_pool0_04
        TSM0_server requires TSM0_mnt_pool0_05
        TSM0_server requires TSM0_mnt_pool0_06
        //      Application TSM0_server
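
Before moving on it is worth double-checking the resource dependency tree and the group state on both nodes – a quick sketch using the names defined above (an actual switchover test makes more sense later, once the TSM instance and its /etc/init.d/tsm0 script exist):

[root@300 ~]# hares -dep | grep TSM0
[root@300 ~]# hagrp -state TSM0_site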

Local Per Node Resources

[root@300 ~]# lvcreate -n lv_tmp        -L  4G vg_local
[root@300 ~]# lvcreate -n lv_opt_tivoli -L 16G vg_local
[root@300 ~]# lvcreate -n lv_home       -L  4G vg_local
[root@300 ~]# mkfs.ext3 /dev/vg_local/lv_tmp
[root@300 ~]# mkfs.ext3 /dev/vg_local/lv_opt_tivoli
[root@300 ~]# mkfs.ext3 /dev/vg_local/lv_home
[root@301 ~]# lvcreate -n lv_tmp        -L  4G vg_local
[root@301 ~]# lvcreate -n lv_opt_tivoli -L 16G vg_local
[root@301 ~]# lvcreate -n lv_home       -L  4G vg_local
[root@301 ~]# mkfs.ext3 /dev/vg_local/lv_tmp
[root@301 ~]# mkfs.ext3 /dev/vg_local/lv_opt_tivoli
[root@301 ~]# mkfs.ext3 /dev/vg_local/lv_home
[root@300 ~]# cat /etc/fstab
/dev/mapper/vg_local-lv_root              /           ext3 rw,noatime,nodiratime      1 1
UUID=28d0988a-e6d7-48d8-b0e5-0f70f8eb681e /boot       ext3 defaults                   1 2
UUID=D401-661A                            /boot/efi   vfat umask=0077,shortname=winnt 0 0
/dev/vg_local/lv_swap                     swap        swap defaults                   0 0
/dev/vg_local/lv_tmp                      /tmp        ext3 rw,noatime,nodiratime      2 2
/dev/vg_local/lv_opt_tivoli               /opt/tivoli ext3 rw,noatime,nodiratime      2 2
/dev/vg_local/lv_home                     /home       ext3 rw,noatime,nodiratime      2 2

# VIRT
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
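
After extending /etc/fstab the new local filesystems still need mount points and a mount on each node; a minimal sketch (repeat on node 301):

[root@300 ~]# mkdir -p /opt/tivoli
[root@300 ~]# mount -a
[root@300 ~]# df -h /tmp /opt/tivoli /home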

Install IBM TSM Server Dependencies

[root@ANY ~]# yum install numactl
[root@ANY ~]# yum install /usr/lib/libgtk-x11-2.0.so.0
[root@ANY ~]# yum install /usr/lib64/libgtk-x11-2.0.so.0
[root@ANY ~]# yum install xorg-x11-xauth xterm fontconfig libICE \
                          libX11-common libXau libXmu libSM libX11 libXt

System /etc/sysctl.conf parameters for both nodes.

[root@300 ~]# cat /etc/sysctl.conf
# Controls IP packet forwarding
net.ipv4.ip_forward = 0

# Controls source route verification
net.ipv4.conf.default.rp_filter = 1

# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0

# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0

# Controls whether core dumps will append the PID to the core filename.
# Useful for debugging multi-threaded applications.
kernel.core_uses_pid = 1

# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1

# Disable netfilter on bridges.
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0

# Controls the default maxmimum size of a mesage queue
kernel.msgmnb = 65536

# Controls the maximum size of a message, in bytes
kernel.msgmax = 65536

# Controls the maximum shared segment size, in bytes
kernel.shmmax = 206158430208

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296

# For SF HA
kernel.hung_task_panic=0

# NetWorker
# connection backlog (hash tables) to the maximum value allowed
net.ipv4.tcp_max_syn_backlog = 8192
net.core.netdev_max_backlog = 8192

# increase the memory size available for TCP buffers
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 8192 524288 16777216
net.ipv4.tcp_wmem = 8192 524288 16777216

# recommended keepalive values
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 20
net.ipv4.tcp_keepalive_time = 600

# recommended timeout after improper close
net.ipv4.tcp_fin_timeout = 60
sunrpc.tcp_slot_table_entries = 64

# for RDBMS 11.2.0.4 rman cat
fs.suid_dumpable = 1
fs.aio-max-nr = 1048576
fs.file-max = 6815744

# support EMC 2016.04.20
net.core.somaxconn = 1024

# 256 * RAM in GB
kernel.shmmni = 65536

# TSM/NSR
kernel.sem = 250 256000 32 65536

# RAM in GB * 1024
kernel.msgmni = 262144

# TSM
kernel.randomize_va_space = 0
vm.swappiness = 0
vm.overcommit_memory = 0
The /etc/sysctl.conf file on the second node (301) is identical.
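
The new values can be loaded on both nodes without a reboot:

[root@300 ~]# sysctl -p
[root@301 ~]# sysctl -p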

Install IBM TSM Server

Connect to each node with SSH Forwarding enabled and install the IBM TSM server.
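
The server installer runs as a graphical IBM Installation Manager wizard (hence the X11 packages installed earlier), so the SSH session needs X11 forwarding; for example:

[LOCAL] % ssh -X root@300
[root@300 ~]# xterm   # quick test that X11 forwarding works before launching the installer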

[root@300 ~]# chmod +x 7.1.6.000-TIV-TSMSRV-Linuxx86_64.bin
[root@300 ~]# ./7.1.6.000-TIV-TSMSRV-Linuxx86_64.bin
[root@300 ~]# ./install.sh

… and the second node.

[root@301 ~]# chmod +x 7.1.6.000-TIV-TSMSRV-Linuxx86_64.bin
[root@301 ~]# ./7.1.6.000-TIV-TSMSRV-Linuxx86_64.bin
[root@301 ~]# ./install.sh

Options chosen during installation.

INSTALL | DESELECT 'Languages' and DESELECT 'Operations Center'
INSTALL | /opt/tivoli/IBM/IBMIMShared
INSTALL | /opt/tivoli/IBM/InstallationManager/eclipse
INSTALL | /opt/tivoli/tsm

Screenshots from the installation process.

ibm-tsm-install-01

ibm-tsm-install-02

ibm-tsm-install-03

ibm-tsm-install-04

ibm-tsm-install-05

ibm-tsm-install-06

Install IBM TSM Client

[root@300 ~]# yum localinstall gskcrypt64-8.0.50.66.linux.x86_64.rpm \
                               gskssl64-8.0.50.66.linux.x86_64.rpm \
                               TIVsm-API64.x86_64.rpm \
                               TIVsm-BA.x86_64.rpm
[root@301 ~]# yum localinstall gskcrypt64-8.0.50.66.linux.x86_64.rpm \
                               gskssl64-8.0.50.66.linux.x86_64.rpm \
                               TIVsm-API64.x86_64.rpm \
                               TIVsm-BA.x86_64.rpm
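
A quick way to confirm the client packages are in place on both nodes:

[root@300 ~]# rpm -qa | grep -E 'TIVsm|gsk'
[root@301 ~]# rpm -qa | grep -E 'TIVsm|gsk'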

Nodes Configuration for IBM TSM Server

[root@300 ~]# useradd -u 1500 -m tsm0
[root@301 ~]# useradd -u 1500 -m tsm0
[root@300 ~]# passwd tsm0
Changing password for user tsm0.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

[root@301 ~]# passwd tsm0
Changing password for user tsm0.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
[root@300 ~]# tail -1 /etc/passwd
tsm0:x:1500:1500::/home/tsm0:/bin/bash

[root@301 ~]# tail -1 /etc/passwd
tsm0:x:1500:1500::/home/tsm0:/bin/bash
[root@300 ~]# tail -1 /etc/group
tsm0:x:1500:

[root@301 ~]# tail -1 /etc/group
tsm0:x:1500:
[root@300 ~]# cat /etc/security/limits.conf
# ORACLE
oracle              soft    nproc   16384
oracle              hard    nproc   16384
oracle              soft    nofile  4096
oracle              hard    nofile  65536
oracle              soft    stack   10240

# TSM
tsm0                soft    nofile  32768
tsm0                hard    nofile  32768

[root@301 ~]# cat /etc/security/limits.conf
# ORACLE
oracle              soft    nproc   16384
oracle              hard    nproc   16384
oracle              soft    nofile  4096
oracle              hard    nofile  65536
oracle              soft    stack   10240

# TSM
tsm0                soft    nofile  32768
tsm0                hard    nofile  32768
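
These limits apply only to new sessions of the tsm0 user; a quick check (both commands should print 32768):

[root@300 ~]# su - tsm0 -c 'ulimit -n'
[root@300 ~]# su - tsm0 -c 'ulimit -Hn'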
[root@300 ~]# :> /var/run/dsmserv_tsm0.pid
[root@301 ~]# :> /var/run/dsmserv_tsm0.pid
[root@300 ~]# chown tsm0:tsm0 /var/run/dsmserv_tsm0.pid
[root@301 ~]# chown tsm0:tsm0 /var/run/dsmserv_tsm0.pid
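
The TSM0_server Application resource defined earlier points at /etc/init.d/tsm0, which does not exist yet – it is created later, once the TSM instance itself is configured. The only hard requirement from the VCS Application agent side is that the MonitorProgram exits with code 110 when the service is online and 100 when it is offline. A minimal sketch of such a script, using the PID file prepared above (the start/stop bodies are placeholders, not the final script):

#!/bin/sh
# /etc/init.d/tsm0 - minimal skeleton for the VCS Application agent (sketch only)
PIDFILE=/var/run/dsmserv_tsm0.pid

case "${1}" in
  start)
    # start dsmserv for the tsm0 instance here (details depend on the instance setup)
    ;;
  stop)
    # stop the instance cleanly here (for example via dsmadmc 'halt')
    ;;
  status)
    # VCS Application agent convention: exit 110 when online, 100 when offline
    if [ -s ${PIDFILE} ] && kill -0 $( cat ${PIDFILE} ) 2> /dev/null
    then
      exit 110
    else
      exit 100
    fi
    ;;
esac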
[root@300 ~]# hares -state | grep TSM
TSM0_dg               State                 300  OFFLINE
TSM0_dg               State                 301  OFFLINE
TSM0_ip_bond0         State                 300  OFFLINE
TSM0_ip_bond0         State                 301  OFFLINE
TSM0_mnt_active_log   State                 300  OFFLINE
TSM0_mnt_active_log   State                 301  OFFLINE
TSM0_mnt_archive_log  State                 300  OFFLINE
TSM0_mnt_archive_log  State                 301  OFFLINE
TSM0_mnt_db_01        State                 300  OFFLINE
TSM0_mnt_db_01        State                 301  OFFLINE
TSM0_mnt_db_02        State                 300  OFFLINE
TSM0_mnt_db_02        State                 301  OFFLINE
TSM0_mnt_db_03        State                 300  OFFLINE
TSM0_mnt_db_03        State                 301  OFFLINE
TSM0_mnt_db_backup_01 State                 300  OFFLINE
TSM0_mnt_db_backup_01 State                 301  OFFLINE
TSM0_mnt_db_backup_02 State                 300  OFFLINE
TSM0_mnt_db_backup_02 State                 301  OFFLINE
TSM0_mnt_db_backup_03 State                 300  OFFLINE
TSM0_mnt_db_backup_03 State                 301  OFFLINE
TSM0_mnt_instance     State                 300  OFFLINE
TSM0_mnt_instance     State                 301  OFFLINE
TSM0_mnt_pool0_01     State                 300  OFFLINE
TSM0_mnt_pool0_01     State                 301  OFFLINE
TSM0_mnt_pool0_02     State                 300  OFFLINE
TSM0_mnt_pool0_02     State                 301  OFFLINE
TSM0_mnt_pool0_03     State                 300  OFFLINE
TSM0_mnt_pool0_03     State                 301  OFFLINE
TSM0_mnt_pool0_04     State                 300  OFFLINE
TSM0_mnt_pool0_04     State                 301  OFFLINE
TSM0_mnt_pool0_05     State                 300  OFFLINE
TSM0_mnt_pool0_05     State                 301  OFFLINE
TSM0_mnt_pool0_06     State                 300  OFFLINE
TSM0_mnt_pool0_06     State                 301  OFFLINE
TSM0_nic_bond0        State                 300  ONLINE
TSM0_nic_bond0        State                 301  ONLINE
TSM0_server           State                 300  OFFLINE
TSM0_server           State                 301  OFFLINE
[root@300 ~]# hares -online TSM0_mnt_instance -sys $( hostname -s )
[root@300 ~]# hares -online TSM0_ip_bond0     -sys $( hostname -s )
[root@300 ~]# hares -state | grep TSM0 | grep 301 | grep mnt | grep -v instance | awk '{print $1}' \
                | while read I; do hares -online ${I} -sys $( hostname -s ); done
[root@300 ~]# hares -state | grep 301 | grep TSM0
TSM0_dg               State                 301  ONLINE
TSM0_ip_bond0         State                 301  ONLINE
TSM0_mnt_active_log   State                 301  ONLINE
TSM0_mnt_archive_log  State                 301  ONLINE
TSM0_mnt_db_01        State                 301  ONLINE
TSM0_mnt_db_02        State                 301  ONLINE
TSM0_mnt_db_03        State                 301  ONLINE
TSM0_mnt_db_backup_01 State                 301  ONLINE
TSM0_mnt_db_backup_02 State                 301  ONLINE
TSM0_mnt_db_backup_03 State                 301  ONLINE
TSM0_mnt_instance     State                 301  ONLINE
TSM0_mnt_pool0_01     State                 301  ONLINE
TSM0_mnt_pool0_02     State                 301  ONLINE
TSM0_mnt_pool0_03     State                 301  ONLINE
TSM0_mnt_pool0_04     State                 301  ONLINE
TSM0_mnt_pool0_05     State                 301  ONLINE
TSM0_mnt_pool0_06     State                 301  ONLINE
TSM0_nic_bond0        State                 301  ONLINE
TSM0_server           State                 301  OFFLINE
[root@300 ~]# find /tsm0 | grep -v 'lost+found'
/tsm0
/tsm0/active_log
/tsm0/archive_log
/tsm0/db
/tsm0/db/db_01
/tsm0/db/db_02
/tsm0/db/db_03
/tsm0/db_backup
/tsm0/db_backup/db_backup_01
/tsm0/db_backup/db_backup_02
/tsm0/db_backup/db_backup_03
/tsm0/pool0
/tsm0/pool0/pool0_01
/tsm0/pool0/pool0_02
/tsm0/pool0/pool0_03
/tsm0/pool0/pool0_04
/tsm0/pool0/pool0_05
/tsm0/pool0/pool0_06
[root@300 ~]# chown -R tsm0:tsm0 /tsm0

IBM TSM Server Configuration

Connect to one of the nodes with SSH Forwarding enabled.

[root@300 ~]# cd /opt/tivoli/tsm/server/bin
[root@300 /opt/tivoli/tsm/server/bin]# ./dsmicfgx
Preparing to install...
Extracting the JRE from the installer archive...
Unpacking the JRE...
Extracting the installation resources from the installer archive...
Configuring the installer for this system's environment...

Launching installer...

Options chosen during configuration.

INSTALL | Instance user ID:
INSTALL |    tsm0
INSTALL |
INSTALL | Instance directory:
INSTALL |    /tsm0
INSTALL |
INSTALL | Database directories:
INSTALL |    /tsm0/db/db_01
INSTALL |    /tsm0/db/db_02
INSTALL |    /tsm0/db/db_03
INSTALL |
INSTALL | Active log directory:
INSTALL |    /tsm0/active_log
INSTALL |
INSTALL | Primary archive log directory:
INSTALL |    /tsm0/archive_log
INSTALL |
INSTALL | Instance autostart setting:
INSTALL |    Start automatically using the instance user ID

Screenshots from the configuration process.

ibm-tsm-configure-01

ibm-tsm-configure-02

ibm-tsm-configure-03

ibm-tsm-configure-04

ibm-tsm-configure-05

ibm-tsm-configure-06

ibm-tsm-configure-07

ibm-tsm-configure-08

ibm-tsm-configure-09

Log from the IBM TSM DB2 instance creation.

Creating the database manager instance...
The database manager instance was created successfully.

Formatting the server database...

ANR7800I DSMSERV generated at 16:39:04 on Jun  8 2016.

IBM Tivoli Storage Manager for Linux/x86_64
Version 7, Release 1, Level 6.000

Licensed Materials - Property of IBM

(C) Copyright IBM Corporation 1990, 2016.
All rights reserved.
U.S. Government Users Restricted Rights - Use, duplication or disclosure
restricted by GSA ADP Schedule Contract with IBM Corporation.

ANR7801I Subsystem process ID is 5208.
ANR0900I Processing options file /tsm0/dsmserv.opt.
ANR0010W Unable to open message catalog for language en_US.UTF-8. The default
language message catalog will be used.
ANR7814I Using instance directory /tsm0.
ANR4726I The ICC support module has been loaded.
ANR0152I Database manager successfully started.
ANR2976I Offline DB backup for database TSMDB1 started.
ANR2974I Offline DB backup for database TSMDB1 completed successfully.
ANR0992I Server's database formatting complete.
ANR0369I Stopping the database manager because of a server shutdown.

Format completed with return code 0
Beginning initial configuration...

ANR7800I DSMSERV generated at 16:39:04 on Jun  8 2016.

IBM Tivoli Storage Manager for Linux/x86_64
Version 7, Release 1, Level 6.000

Licensed Materials - Property of IBM

(C) Copyright IBM Corporation 1990, 2016.
All rights reserved.
U.S. Government Users Restricted Rights - Use, duplication or disclosure
restricted by GSA ADP Schedule Contract with IBM Corporation.

ANR7801I Subsystem process ID is 8741.
ANR0900I Processing options file /tsm0/dsmserv.opt.
ANR0010W Unable to open message catalog for language en_US.UTF-8. The default
language message catalog will be used.
ANR7814I Using instance directory /tsm0.
ANR4726I The ICC support module has been loaded.
ANR0990I Server restart-recovery in progress.
ANR0152I Database manager successfully started.
ANR1628I The database manager is using port 51500 for server connections.
ANR1636W The server machine GUID changed: old value (), new value (f0.8a.27.61-
.e5.43.b6.11.92.b5.00.0a.f7.49.31.18).
ANR2100I Activity log process has started.
ANR3733W The master encryption key cannot be generated because the server
password is not set.
ANR3339I Default Label in key data base is TSM Server SelfSigned Key.
ANR4726I The NAS-NDMP support module has been loaded.
ANR1794W TSM SAN discovery is disabled by options.
ANR2200I Storage pool BACKUPPOOL defined (device class DISK).
ANR2200I Storage pool ARCHIVEPOOL defined (device class DISK).
ANR2200I Storage pool SPACEMGPOOL defined (device class DISK).
ANR2560I Schedule manager started.
ANR0993I Server initialization complete.
ANR0916I TIVOLI STORAGE MANAGER distributed by Tivoli is now ready for use.
ANR2094I Server name set to TSM0.
ANR4865W The server name has been changed. Windows clients that use "passworda-
ccess generate" may be unable to authenticate with the server.
ANR2068I Administrator ADMIN registered.
ANR2076I System privilege granted to administrator ADMIN.
ANR1912I Stopping the activity log because of a server shutdown.
ANR0369I Stopping the database manager because of a server shutdown.

Configuration is complete.

Modify IBM TSM Server Startup Script

The startup script has to be modified to work properly with Veritas Cluster Server: the status routine is changed to exit with code 100 when the instance is stopped and 110 when it is running, as the VCS Application agent MonitorProgram treats exit code 100 as OFFLINE and 110 as ONLINE. The complete modified script is shown below and the exact changes are visible in the diff(1) output that follows.

[root@300 ~]# cat /etc/init.d/tsm0
#!/bin/bash
#
# dsmserv       Start/Stop IBM Tivoli Storage Manager
#
# chkconfig: - 90 10
# description: Starts/Stops an IBM Tivoli Storage Manager Server instance
# processname: dsmserv
# pidfile: /var/run/dsmserv_instancename.pid

#***********************************************************************
# Distributed Storage Manager (ADSM)                                   *
# Server Component                                                     *
#                                                                      *
# IBM Confidential                                                     *
# (IBM Confidential-Restricted when combined with the Aggregated OCO   *
# Source Modules for this Program)                                     *
#                                                                      *
# OCO Source Materials                                                 *
#                                                                      *
# 5765-303 (C) Copyright IBM Corporation 1990, 2009                    *
#***********************************************************************

#
# This init script is designed to start a single Tivoli Storage Manager
# server instance on a system where multiple instances might be running.
# It assumes that the name of the script is also the name of the instance
# to be started (or, if the script name starts with Snn or Knn, where 'n'
# is a digit, that the name of the instance is the script name with the
# three letter prefix removed).
#
# To use the script to start multiple instances, install multiple copies
# of the script in /etc/rc.d/init.d, naming each copy after the instance
# it will start.
#
# The script makes a number of simplifying assumptions about the way
# the instance is set up.
# - The Tivoli Storage Manager Server instance runs as a non-root user whose
#   name is the instance name
# - The server's instance directory (the directory in which it keeps all of
#   its important state information) is in a subdirectory of the home
#   directory called tsminst1.
# If any of these assumptions are not valid, then the script will require
# some modifications to work.  To start with, look at the
# instance, instance_user, and instance_dir variables set below...

# First of all, check for syntax
if [[ $# != 1 ]]
then
  echo $"Usage: $0 {start|stop|status|restart}"
  exit 1
fi

prog="dsmserv"
instance=tsm0
serverBinDir="/opt/tivoli/tsm/server/bin"

if [[ ! -e $serverBinDir/$prog ]]
then
   echo "IBM Tivoli Storage Manager Server not found on this system ($serverBinDir/$prog)"
   exit -1
fi

# see if $0 starts with Snn or Knn, where 'n' is a digit.  If it does, then
# strip off the prefix and use the remainder as the instance name.
if [[ ${instance:0:1} == S ]]
then
  instance=${instance#S[0123456789][0123456789]}
elif [[ ${instance:0:1} == K ]]
then
  instance=${instance#K[0123456789][0123456789]}
fi

instance_home=`${serverBinDir}/dsmfngr $instance 2>/dev/null`
if [[ -z "$instance_home" ]]
then
  instance_home="/home/${instance}"
fi
instance_user=tsm0
instance_dir=/tsm0
pidfile="/var/run/${prog}_${instance}.pid"

PATH=/sbin:/bin:/usr/bin:/usr/sbin:$serverBinDir

#
# Do some basic error checking before starting the server
#
# Is the server installed?
if [[ ! -e $serverBinDir/$prog ]]
then
   echo "IBM Tivoli Storage Manager Server not found on this system"
   exit 0
fi

# Does the instance directory exist?
if [[ ! -d $instance_dir ]]
then
 echo "Instance directory ${instance_dir} does not exist"
 exit -1
fi
rc=0

SLEEP_INTERVAL=5
MAX_SLEEP_TIME=10

function check_pid_file()
{
    test -f $pidfile
}

function check_process()
{
    ps -p `cat $pidfile` > /dev/null
}

function check_running()
{
    check_pid_file && check_process
}

start() {
        # set the standard value for the user limits
        ulimit -c unlimited
        ulimit -d unlimited
        ulimit -f unlimited
        ulimit -n 65536
        ulimit -t unlimited
        ulimit -u 16384

        echo -n "Starting $prog instance $instance ... "
        #if we're already running, say so
        status 0
        if [[ $g_status == "running" ]]
        then
           echo "$prog instance $instance already running..."
           exit 0
        else
           $serverBinDir/rc.dsmserv -u $instance_user -i $instance_dir -q >/dev/null 2>&1 &
           # give enough time to server to start
           sleep 5
           # if the lock file got created, we did ok
           if [[ -f $instance_dir/dsmserv.v6lock ]]
           then
              gawk --source '{print $4}' $instance_dir/dsmserv.v6lock>$pidfile
              [ $? = 0 ] && echo "Succeeded" || echo "Failed"
              rc=$?
              echo
              [ $rc -eq 0 ] && touch /var/lock/subsys/${instance}
              return $rc
           else
              echo "Failed"
              return 1
           fi
       fi
}

stop() {
        echo  "Stopping $prog instance $instance ..."
        if [[ -e $pidfile ]]
        then
           # make sure someone else didn't kill us already
           progpid=`cat $pidfile`
           running=`ps -ef | grep $prog | grep -w $progpid | grep -v grep`
           if [[ -n $running ]]
           then
              #echo "executing cmd kill `cat $pidfile`"
              kill `cat $pidfile`

              total_slept=0
              while check_running; do \
                  echo  "$prog instance $instance still running, will check after $SLEEP_INTERVAL seconds"
                  sleep $SLEEP_INTERVAL
                  total_slept=`expr $total_slept + 1`

                  if [ "$total_slept" -gt "$MAX_SLEEP_TIME" ]; then \
                      break
                  fi
              done

              if  check_running
              then
                echo "Unable to stop $prog instance $instance"
                exit 1
              else
                echo "$prog instance $instance stopped Successfully"
              fi
           fi
           # remove the pid file so that we don't try to kill same pid again
           rm $pidfile
           if [[ $? != 0 ]]
           then
              echo "Process $prog instance $instance stopped, but unable to remove $pidfile"
              echo "Be sure to remove $pidfile."
              exit 1
           fi
        else
           echo "$prog instance $instance is not running."
        fi
        rc=$?
        echo
        [ $rc -eq 0 ] && rm -f /var/lock/subsys/${instance}
        return $rc
}

status() {
      # check usage
      if [[ $# != 1 ]]
      then
         echo "$0: Invalid call to status routine. Expected argument: "
         echo "where display_to_screen is 0 or 1 and indicates whether output will be sent to screen."
         exit 100
         # exit 1
      fi
      #see if file $pidfile exists
      # if it does, see if process is running
      # if it doesn't, it's not running - or at least was not started by dsmserv.rc
      if [[ -e $pidfile ]]
      then
         progpid=`cat $pidfile`
         running=`ps -ef | grep $prog | grep -w $progpid | grep -v grep`
         if [[ -n $running ]]
         then
            g_status="running"
         else
            g_status="stopped"
            # remove the pidfile if stopped.
            if [[ -e $pidfile ]]
            then
                rm $pidfile
                if [[ $? != 0 ]]
                then
                    echo "$prog instance $instance stopped, but unable to remove $pidfile"
                    echo "Be sure to remove $pidfile."
                fi
            fi
         fi
      else
        g_status="stopped"
      fi
      if [[ $1 == 1 ]]
      then
            echo "Status of $prog instance $instance: $g_status"
      fi

      if [ "${1}" = "1" ]
      then
        case ${g_status} in
          (stopped) EXIT=100 ;;
          (running) EXIT=110 ;;
        esac
        exit ${EXIT}
      fi
}

restart() {
        stop
        start
}

case "$1" in
  start)
        start
        ;;
  stop)
        stop
        ;;
  status)
        status 1
        ;;
  restart|reload)
        restart
        ;;
  *)
        echo $"Usage: $0 {start|stop|status|restart}"
        exit 1
esac

exit $?

… and the diff(1) between the original and the modified one.

[root@300 ~]# diff -u /etc/init.d/tsm0 /root/tsm0
--- /etc/init.d/tsm0    2016-07-13 13:20:43.000000000 +0200
+++ /root/tsm0          2016-07-13 13:27:41.000000000 +0200
@@ -207,7 +207,8 @@
       then
          echo "$0: Invalid call to status routine. Expected argument: "
          echo "where display_to_screen is 0 or 1 and indicates whether output will be sent to screen."
-         exit 1
+         exit 100
+         # exit 1
       fi
       #see if file $pidfile exists
       # if it does, see if process is running
@@ -239,6 +240,15 @@
       then
             echo "Status of $prog instance $instance: $g_status"
       fi
+
+      if [ "${1}" = "1" ]
+      then
+        case ${g_status} in
+          (stopped) EXIT=100 ;;
+          (running) EXIT=110 ;;
+        esac
+        exit ${EXIT}
+      fi
 }

 restart() {
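
For reference, this is roughly how such an init script is usually wired into a VCS Application resource. The attribute names below come from the standard Application agent; the actual attribute values of TSM0_server in this cluster are not shown here, so treat this only as an illustrative sketch:

[root@300 ~]# haconf -makerw
[root@300 ~]# hares -modify TSM0_server StartProgram   "/etc/init.d/tsm0 start"
[root@300 ~]# hares -modify TSM0_server StopProgram    "/etc/init.d/tsm0 stop"
[root@300 ~]# hares -modify TSM0_server MonitorProgram "/etc/init.d/tsm0 status"
[root@300 ~]# hares -modify TSM0_server PidFiles       /var/run/dsmserv_tsm0.pid
[root@300 ~]# haconf -dump -makero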

Copy tsm0 Profile to the Other Node

[root@300 ~]# pwd
/home
[root@300 /home]# tar -czf - tsm0 | ssh 301 'tar -C /home -xzf -'
[root@300 ~]# cat /home/tsm0/sqllib/db2nodes.cfg
0 TSM0.domain.com 0
[root@301 ~]# cat /home/tsm0/sqllib/db2nodes.cfg
0 TSM0.domain.com 0

IBM TSM Server Start

[root@300 ~]# hares -online TSM0_ip_bond0         -sys 300
[root@300 ~]# hares -online TSM0_mnt_active_log   -sys 300
[root@300 ~]# hares -online TSM0_mnt_archive_log  -sys 300
[root@300 ~]# hares -online TSM0_mnt_db_01        -sys 300
[root@300 ~]# hares -online TSM0_mnt_db_02        -sys 300
[root@300 ~]# hares -online TSM0_mnt_db_03        -sys 300
[root@300 ~]# hares -online TSM0_mnt_db_backup_01 -sys 300
[root@300 ~]# hares -online TSM0_mnt_db_backup_02 -sys 300
[root@300 ~]# hares -online TSM0_mnt_db_backup_03 -sys 300
[root@300 ~]# hares -online TSM0_mnt_instance     -sys 300
[root@300 ~]# hares -online TSM0_mnt_pool0_01     -sys 300
[root@300 ~]# hares -online TSM0_mnt_pool0_02     -sys 300
[root@300 ~]# hares -online TSM0_mnt_pool0_03     -sys 300
[root@300 ~]# hares -online TSM0_mnt_pool0_04     -sys 300
[root@300 ~]# hares -online TSM0_mnt_pool0_05     -sys 300
[root@300 ~]# hares -online TSM0_mnt_pool0_06     -sys 300
[root@300 ~]# hares -state | grep TSM0 | grep 300
TSM0_dg               State                 300  ONLINE
TSM0_ip_bond0         State                 300  ONLINE
TSM0_mnt_active_log   State                 300  ONLINE
TSM0_mnt_archive_log  State                 300  ONLINE
TSM0_mnt_db_01        State                 300  ONLINE
TSM0_mnt_db_02        State                 300  ONLINE
TSM0_mnt_db_03        State                 300  ONLINE
TSM0_mnt_db_backup_01 State                 300  ONLINE
TSM0_mnt_db_backup_02 State                 300  ONLINE
TSM0_mnt_db_backup_03 State                 300  ONLINE
TSM0_mnt_instance     State                 300  ONLINE
TSM0_mnt_pool0_01     State                 300  ONLINE
TSM0_mnt_pool0_02     State                 300  ONLINE
TSM0_mnt_pool0_03     State                 300  ONLINE
TSM0_mnt_pool0_04     State                 300  ONLINE
TSM0_mnt_pool0_05     State                 300  ONLINE
TSM0_mnt_pool0_06     State                 300  ONLINE
TSM0_nic_bond0        State                 300  ONLINE
TSM0_server           State                 300  OFFLINE

[root@300 ~]# cat >> /etc/services << __EOF
DB2_tsm0        60000/tcp
DB2_tsm0_1      60001/tcp
DB2_tsm0_2      60002/tcp
DB2_tsm0_3      60003/tcp
DB2_tsm0_4      60004/tcp
DB2_tsm0_END    60005/tcp
__EOF
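
These DB2 port entries are presumably needed in /etc/services on the second node as well, so that the DB2 instance can start there after a failover; copying them over is a step I would add here (an assumption on my part, not something required by the commands above):

[root@300 ~]# grep DB2_tsm0 /etc/services | ssh 301 'cat >> /etc/services'
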
[root@300 ~]# hagrp -freeze TSM0_site
[root@300 ~]# hastatus -sum

-- SYSTEM STATE
-- System               State                Frozen

A  300            RUNNING              0
A  301            RUNNING              0

-- GROUP STATE
-- Group           System               Probed     AutoDisabled    State

B  NSR_site        300            Y          N               OFFLINE
B  NSR_site        301            Y          N               ONLINE
B  RMAN_site       300            Y          N               OFFLINE
B  RMAN_site       301            Y          N               ONLINE
B  TSM0_site       300            Y          N               PARTIAL
B  TSM0_site       301            Y          N               OFFLINE
B  VCS_site        300            Y          N               OFFLINE
B  VCS_site        301            Y          N               ONLINE

-- GROUPS FROZEN
-- Group

C  TSM0_site

-- RESOURCES DISABLED
-- Group           Type            Resource

H  TSM0_site      Application     TSM0_server
H  TSM0_site      DiskGroup       TSM0_dg
H  TSM0_site      IP              TSM0_ip_bond0
H  TSM0_site      Mount           TSM0_mnt_active_log
H  TSM0_site      Mount           TSM0_mnt_archive_log
H  TSM0_site      Mount           TSM0_mnt_db_01
H  TSM0_site      Mount           TSM0_mnt_db_02
H  TSM0_site      Mount           TSM0_mnt_db_03
H  TSM0_site      Mount           TSM0_mnt_db_backup_01
H  TSM0_site      Mount           TSM0_mnt_db_backup_02
H  TSM0_site      Mount           TSM0_mnt_db_backup_03
H  TSM0_site      Mount           TSM0_mnt_instance
H  TSM0_site      Mount           TSM0_mnt_pool0_01
H  TSM0_site      Mount           TSM0_mnt_pool0_02
H  TSM0_site      Mount           TSM0_mnt_pool0_03
H  TSM0_site      Mount           TSM0_mnt_pool0_04
H  TSM0_site      Mount           TSM0_mnt_pool0_05
H  TSM0_site      Mount           TSM0_mnt_pool0_06
H  TSM0_site      NIC             TSM0_nic_bond0

[root@300 ~]# su - tsm0 -c '/opt/tivoli/tsm/server/bin/dsmserv -i /tsm0'
ANR7800I DSMSERV generated at 16:39:04 on Jun  8 2016.

IBM Tivoli Storage Manager for Linux/x86_64
Version 7, Release 1, Level 6.000

Licensed Materials - Property of IBM

(C) Copyright IBM Corporation 1990, 2016.
All rights reserved.
U.S. Government Users Restricted Rights - Use, duplication or disclosure
restricted by GSA ADP Schedule Contract with IBM Corporation.

ANR7801I Subsystem process ID is 9834.
ANR0900I Processing options file /tsm0/dsmserv.opt.
ANR0010W Unable to open message catalog for language en_US.UTF-8. The default language message
catalog will be used.
ANR7814I Using instance directory /tsm0.
ANR4726I The ICC support module has been loaded.
ANR0990I Server restart-recovery in progress.
ANR0152I Database manager successfully started.
ANR1628I The database manager is using port 51500 for server connections.
ANR1635I The server machine GUID, 54.80.e8.50.e4.48.e6.11.8e.6d.00.0a.f7.49.2b.08, has
initialized.
ANR2100I Activity log process has started.
ANR3733W The master encryption key cannot be generated because the server password is not set.
ANR3339I Default Label in key data base is TSM Server SelfSigned Key.
ANR4726I The NAS-NDMP support module has been loaded.
ANR1794W TSM SAN discovery is disabled by options.
ANR2803I License manager started.
ANR8200I TCP/IP Version 4 driver ready for connection with clients on port 1500.
ANR9639W Unable to load Shared License File dsmreg.sl.
ANR9652I An EVALUATION LICENSE for IBM System Storage Archive Manager will expire on
08/13/2016.
ANR9652I An EVALUATION LICENSE for Tivoli Storage Manager Basic Edition will expire on
08/13/2016.
ANR9652I An EVALUATION LICENSE for Tivoli Storage Manager Extended Edition will expire on
08/13/2016.
ANR2828I Server is licensed to support IBM System Storage Archive Manager.
ANR2828I Server is licensed to support Tivoli Storage Manager Basic Edition.
ANR2828I Server is licensed to support Tivoli Storage Manager Extended Edition.
ANR2560I Schedule manager started.
ANR0984I Process 1 for EXPIRE INVENTORY (Automatic) started in the BACKGROUND at 01:58:03 PM.
ANR0811I Inventory client file expiration started as process 1.
ANR0167I Inventory file expiration process 1 processed for 0 minutes.
ANR0812I Inventory file expiration process 1 completed: processed 0 nodes, examined 0 objects,
deleting 0 backup objects, 0 archive objects, 0 DB backup volumes, and 0 recovery plan files. 0
objects were retried and 0 errors were encountered.
ANR0985I Process 1 for EXPIRE INVENTORY (Automatic) running in the BACKGROUND completed with
completion state SUCCESS at 01:58:03 PM.
ANR0993I Server initialization complete.
ANR0916I TIVOLI STORAGE MANAGER distributed by Tivoli is now ready for use.
TSM:TSM0>q admin
ANR2017I Administrator SERVER_CONSOLE issued command: QUERY ADMIN

Administrator        Days Since       Days Since      Locked?       Privilege Classes
Name                Last Access     Password Set
--------------     ------------     ------------     ----------     -----------------------
ADMIN                        <1               <1         No         System
ADMIN_CENTER

TSM:TSM0>halt
ANR2017I Administrator SERVER_CONSOLE issued command: HALT
ANR1912I Stopping the activity log because of a server shutdown.
ANR0369I Stopping the database manager because of a server shutdown.
ANR0991I Server shutdown complete.


[root@300 ~]# hagrp -unfreeze TSM0_site

[root@300 ~]# hares -state | grep TSM0 | grep 300
TSM0_dg               State                 300  ONLINE
TSM0_ip_bond0         State                 300  ONLINE
TSM0_mnt_active_log   State                 300  ONLINE
TSM0_mnt_archive_log  State                 300  ONLINE
TSM0_mnt_db_01        State                 300  ONLINE
TSM0_mnt_db_02        State                 300  ONLINE
TSM0_mnt_db_03        State                 300  ONLINE
TSM0_mnt_db_backup_01 State                 300  ONLINE
TSM0_mnt_db_backup_02 State                 300  ONLINE
TSM0_mnt_db_backup_03 State                 300  ONLINE
TSM0_mnt_instance     State                 300  ONLINE
TSM0_mnt_pool0_01     State                 300  ONLINE
TSM0_mnt_pool0_02     State                 300  ONLINE
TSM0_mnt_pool0_03     State                 300  ONLINE
TSM0_mnt_pool0_04     State                 300  ONLINE
TSM0_mnt_pool0_05     State                 300  ONLINE
TSM0_mnt_pool0_06     State                 300  ONLINE
TSM0_nic_bond0        State                 300  ONLINE
TSM0_server           State                 300  OFFLINE

[root@301 ~]# hares -online TSM0_server -sys 300

Ignore the errors below during the first IBM TSM server startup.

IGNORE | ERRORS TO IGNORE DURING FIRST IBM TSM SERVER START
IGNORE | 
IGNORE | DBI1306N  The instance profile is not defined.
IGNORE |
IGNORE | Explanation:
IGNORE |
IGNORE | The instance is not defined in the target machine registry.
IGNORE |
IGNORE | User response:
IGNORE |
IGNORE | Specify an existing instance name or create the required instance.

Install IBM TSM Server Licenses

Screenshots from that process below.

ibm-tsm-install-license-01

ibm-tsm-install-license-02

ibm-tsm-install-license-03

ibm-tsm-install-license-04

Let's now register the licenses for IBM TSM.

tsm: TSM0_SITE>register license file=/opt/tivoli/tsm/server/bin/tsmee.lic
ANR2852I Current license information:
ANR2853I New license information:
ANR2828I Server is licensed to support Tivoli Storage Manager Basic Edition.
ANR2828I Server is licensed to support Tivoli Storage Manager Extended Edition.
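
The licensing state can be double-checked with the query license command (output omitted here):

tsm: TSM0_SITE>query license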

IBM TSM Client Configuration on the IBM TSM Server Nodes

[root@300 ~]# cat > /opt/tivoli/tsm/client/ba/bin/dsm.opt << __EOF
SERVERNAME TSM0
__EOF

[root@301 ~]# cat > /opt/tivoli/tsm/client/ba/bin/dsm.opt << __EOF
SERVERNAME TSM0
__EOF

[root@300 ~]# cat > /opt/tivoli/tsm/client/ba/bin/dsm.sys << __EOF
SERVERNAME TSM0
COMMMethod TCPip
TCPPort 1500
TCPSERVERADDRESS localhost
SCHEDLOGNAME /opt/tivoli/tsm/client/ba/bin/dsmsched.log
ERRORLOGNAME /opt/tivoli/tsm/client/ba/bin/dsmerror.log
SCHEDLOGRETENTION 7 D
ERRORLOGRETENTION 7 D
__EOF

[root@301 ~]# cat > /opt/tivoli/tsm/client/ba/bin/dsm.sys << __EOF
SERVERNAME TSM0
COMMMethod TCPip
TCPPort 1500
TCPSERVERADDRESS localhost
SCHEDLOGNAME /opt/tivoli/tsm/client/ba/bin/dsmsched.log
ERRORLOGNAME /opt/tivoli/tsm/client/ba/bin/dsmerror.log
SCHEDLOGRETENTION 7 D
ERRORLOGRETENTION 7 D
__EOF
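
The tsm: TSM0_SITE> prompts used below come from the administrative command line client installed with the BA client packages; a minimal sketch of starting it with the options defined above (the admin ID was registered during dsmicfgx and the password here is just a placeholder):

[root@300 ~]# dsmadmc -id=admin -password=ADMIN-PASSWORD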

Install lin_tape on IBM TSM Server

[root@ALL]# uname -r
2.6.32-504.el6.x86_64

[root@ALL]# uname -r | sed 's|.x86_64||g'
2.6.32-504.el6

[root@ALL]# yum --showduplicates list kernel-devel | grep 2.6.32-504.el6
kernel-devel.x86_64            2.6.32-504.el6                 rhel-6-server-rpms

[root@ALL]# yum install rpm-build kernel-devel-2.6.32-504.el6
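
The lin_tape driver is delivered by IBM as a source RPM that has to be rebuilt against the installed kernel headers – that is why rpm-build and kernel-devel are installed above. A sketch of that rebuild, assuming the source RPM was downloaded from IBM Fix Central (the exact file name may differ):

[root@ALL]# rpmbuild --rebuild lin_tape-3.0.10-1.src.rpm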

[root@ALL]# rpm -Uvh /root/rpmbuild/RPMS/x86_64/lin_tape-3.0.10-1.x86_64.rpm
Preparing...                ########################################### [100%]
   1:lin_tape               ########################################### [100%]
Starting lin_tape...
lin_tape loaded

[root@ALL]# rpm -Uvh lin_taped-3.0.10-rhel6.x86_64.rpm
Preparing...                ########################################### [100%]
   1:lin_taped              ########################################### [100%]
Starting lin_tape...
lin_taped loaded

[root@ALL]# /etc/init.d/lin_tape start
Starting lin_tape... lin_taped already running. Abort!

[root@ALL]# /etc/init.d/lin_tape restart
Shutting down lin_tape... lin_taped unloaded
Starting lin_tape...

Library Configuration

This is quite an unusual configuration, as the IBM TS3310 library with 4 LTO4 drives is logically partitioned into two logical libraries: 2 drives dedicated to Dell/EMC Networker and 2 drives dedicated to the IBM TSM server. Such a library is shown below.

ibm-tsm-ts3310.jpg

The changers and tape drives for each backup system.

Networker | (L) 000001317577_LLA changer0
TSM       | (L) 000001317577_LLB changer1_persistent_TSM0
Networker | (1) 7310132058       tape0
Networker | (2) 7310295146       tape1
TSM       | (3) 7310214751       tape2_persistent_TSM0
TSM       | (4) 7310214904       tape3_persistent_TSM0
[root@300 ~]# find /dev/IBM*
/dev/IBMchanger0
/dev/IBMchanger1
/dev/IBMSpecial
/dev/IBMtape
/dev/IBMtape0
/dev/IBMtape0n
/dev/IBMtape1
/dev/IBMtape1n
/dev/IBMtape2
/dev/IBMtape2n
/dev/IBMtape3
/dev/IBMtape3n

We will use UDEV to create persistent device names.

[root@300 ~]# udevadm info -a -p $(udevadm info -q path -n /dev/IBMtape0)    | grep -i serial
    ATTR{serial_num}=="7310132058"
[root@300 ~]# udevadm info -a -p $(udevadm info -q path -n /dev/IBMtape1)    | grep -i serial
    ATTR{serial_num}=="7310295146"
[root@300 ~]# udevadm info -a -p $(udevadm info -q path -n /dev/IBMtape2)    | grep -i serial
    ATTR{serial_num}=="7310214751"
[root@300 ~]# udevadm info -a -p $(udevadm info -q path -n /dev/IBMtape3)    | grep -i serial
    ATTR{serial_num}=="7310214904"
[root@300 ~]# udevadm info -a -p $(udevadm info -q path -n /dev/IBMchanger0) | grep -i serial
    ATTR{serial_num}=="000001317577_LLA"
[root@300 ~]# udevadm info -a -p $(udevadm info -q path -n /dev/IBMchanger1) | grep -i serial
    ATTR{serial_num}=="000001317577_LLB"
[root@300 ~]# cat /proc/scsi/IBM*
lin_tape version: 3.0.10
lin_tape major number: 239
Attached Changer Devices:
Number  model       SN                HBA             SCSI            FO Path
0       3576-MTL    000001317577_LLA  qla2xxx         2:0:1:1         NA
1       3576-MTL    000001317577_LLB  qla2xxx         4:0:1:1         NA
lin_tape version: 3.0.10
lin_tape major number: 239
Attached Tape Devices:
Number  model       SN                HBA             SCSI            FO Path
0       ULT3580-TD4 7310132058        qla2xxx         2:0:0:0         NA
1       ULT3580-TD4 7310295146        qla2xxx         2:0:1:0         NA
2       ULT3580-TD4 7310214751        qla2xxx         4:0:0:0         NA
3       ULT3580-TD4 7310214904        qla2xxx         4:0:1:0         NA

[root@300 ~]# cat /etc/udev/rules.d/98-lin_tape.rules
KERNEL=="IBMtape*", SYSFS{serial_num}=="7310132058", MODE="0660", SYMLINK="IBMtape0"
KERNEL=="IBMtape*", SYSFS{serial_num}=="7310295146", MODE="0660", SYMLINK="IBMtape1"
KERNEL=="IBMtape*", SYSFS{serial_num}=="7310214751", MODE="0660", SYMLINK="IBMtape2_persistent_TSM0"
KERNEL=="IBMtape*", SYSFS{serial_num}=="7310214904", MODE="0660", SYMLINK="IBMtape3_persistent_TSM0"
KERNEL=="IBMchanger*", ATTR{serial_num}=="000001317577_LLB", MODE="0660", SYMLINK="IBMchanger1_persistent_TSM0"

[root@301 ~]# /etc/init.d/lin_tape stop
Shutting down lin_tape... lin_taped unloaded

[root@301 ~]# rmmod lin_tape

[root@301 ~]# /etc/init.d/lin_tape start
Starting lin_tape...

New persistent devices.

[root@301 ~]# find /dev/IBM*
/dev/IBMchanger0
/dev/IBMchanger1
/dev/IBMchanger1_persistent_TSM0
/dev/IBMSpecial
/dev/IBMtape
/dev/IBMtape0
/dev/IBMtape0n
/dev/IBMtape1
/dev/IBMtape1n
/dev/IBMtape2
/dev/IBMtape2n
/dev/IBMtape2_persistent_TSM0
/dev/IBMtape3
/dev/IBMtape3n
/dev/IBMtape3_persistent_TSM0

Let's update the paths to the tape drives now.

tsm: TSM0_SITE>query path f=d

                   Source Name: TSM0_SITE
                   Source Type: SERVER
              Destination Name: TS3310
              Destination Type: LIBRARY
                       Library:
                     Node Name:
                        Device: /dev/IBMchanger0
              External Manager:
              ZOS Media Server:
                  Comm. Method:
                           LUN:
                     Initiator: 0
                     Directory:
                       On-Line: Yes
Last Update by (administrator): ADMIN
         Last Update Date/Time: 09/16/2014 13:36:14

                   Source Name: TSM0_SITE
                   Source Type: SERVER
              Destination Name: DRIVE0
              Destination Type: DRIVE
                       Library: TS3310
                     Node Name:
                        Device: /dev/IBMtape0
              External Manager:
              ZOS Media Server:
                  Comm. Method:
                           LUN:
                     Initiator: 0
                     Directory:
                       On-Line: Yes
Last Update by (administrator): SERVER_CONSOLE
         Last Update Date/Time: 07/14/2016 14:02:02

                   Source Name: TSM0_SITE
                   Source Type: SERVER
              Destination Name: DRIVE1
              Destination Type: DRIVE
                       Library: TS3310
                     Node Name:
                        Device: /dev/IBMtape1
              External Manager:
              ZOS Media Server:
                  Comm. Method:
                           LUN:
                     Initiator: 0
                     Directory:
                       On-Line: Yes
Last Update by (administrator): SERVER_CONSOLE
         Last Update Date/Time: 07/14/2016 13:59:48

tsm: TSM0_SITE>update path TSM0_SITE TS3310 SRCType=SERVER DESTType=LIBRary online=no
ANR1722I A path from TSM0_SITE to TS3310 has been updated.

tsm: TSM0_SITE>update path TSM0_SITE TS3310 SRCType=SERVER DESTType=LIBRary device=/dev/IBMchanger1_persistent_TSM0
ANR1722I A path from TSM0_SITE to TS3310 has been updated.

tsm: TSM0_SITE>update path TSM0_SITE TS3310 SRCType=SERVER DESTType=LIBRary online=yes
ANR1722I A path from TSM0_SITE to TS3310 has been updated.

tsm: TSM0_SITE>update drive TS3310           DRIVE1           SERial=AUTODetect element=AUTODetect
ANR8467I Drive DRIVE1 in library TS3310 updated.

tsm: TSM0_SITE>update drive TS3310           DRIVE1         online=no
ANR8467I Drive DRIVE1 in library TS3310 updated.

tsm: TSM0_SITE>update drive TS3310           DRIVE1           SERial=AUTODetect element=AUTODetect
ANR8467I Drive DRIVE1 in library TS3310 updated.

tsm: TSM0_SITE>update drive TS3310           DRIVE1         online=yes
ANR8467I Drive DRIVE1 in library TS3310 updated.

tsm: TSM0_SITE>update drive TS3310           DRIVE1           SERial=AUTODetect element=AUTODetect
ANR8467I Drive DRIVE1 in library TS3310 updated.

tsm: TSM0_SITE>update drive TS3310           DRIVE1         online=yes
ANR8467I Drive DRIVE1 in library TS3310 updated.

tsm: TSM0_SITE>update path TSM0_SITE DRIVE0 SRCType=SERVER autodetect=yes DESTType=DRIVE library=ts3310 device=/dev/IBMtape2_persistent_TSM0
ANR1722I A path from TSM0_SITE to TS3310 DRIVE0 has been updated.

tsm: TSM0_SITE>update drive TS3310           DRIVE0           SERial=AUTODetect element=AUTODetect
ANR8467I Drive DRIVE0 in library TS3310 updated.

tsm: TSM0_SITE>update path TSM0_SITE DRIVE1 SRCType=SERVER autodetect=yes DESTType=DRIVE library=ts3310 device=/dev/IBMtape3_persistent_TSM0
ANR1722I A path from TSM0_SITE to TS3310 DRIVE1 has been updated.

tsm: TSM0_SITE>update path TSM0_SITE DRIVE1 SRCType=SERVER DESTType=DRIVE library=ts3310 online=yes
ANR1722I A path from TSM0_SITE to TS3310 DRIVE1 has been updated.

tsm: TSM0_SITE>update path TSM0_SITE DRIVE0 SRCType=SERVER DESTType=DRIVE library=ts3310 online=yes
ANR1722I A path from TSM0_SITE to TS3310 DRIVE0 has been updated.


Let's verify that our library works properly.

tsm: TSM0_SITE>audit library TS3310 checklabel=barcode
ANS8003I Process number 2 started.

tsm: TSM0_SITE>query proc

Process      Process Description      Process Status
  Number
--------     --------------------     -------------------------------------------------
       2     AUDIT LIBRARY            ANR8459I Auditing volume inventory for library
                                       TS3310.


tsm: TSM0_SITE>query act
(...)

08/04/2016 14:30:41      ANR2017I Administrator ADMIN issued command: AUDIT
                          LIBRARY TS3310 checklabel=barcode  (SESSION: 8)
08/04/2016 14:30:41      ANR0984I Process 2 for AUDIT LIBRARY started in the
                          BACKGROUND at 02:30:41 PM. (SESSION: 8, PROCESS: 2)
08/04/2016 14:30:41      ANR8457I AUDIT LIBRARY: Operation for library TS3310
                          started as process 2. (SESSION: 8, PROCESS: 2)
08/04/2016 14:30:46      ANR8358E Audit operation is required for library TS3310.
                          (SESSION: 8, PROCESS: 2)
08/04/2016 14:30:51      ANR8439I SCSI library TS3310 is ready for operations.
                          (SESSION: 8, PROCESS: 2)

(...)

08/04/2016 14:31:26      ANR0985I Process 2 for AUDIT LIBRARY running in the
                          BACKGROUND completed with completion state SUCCESS at
                          02:31:26 PM. (SESSION: 8, PROCESS: 2)

(...)

IBM TSM Storage Pool Configuration

IBM TSM container storage pool creation.

tsm: TSM0_SITE>define stgpool POOL0_stgFC stgtype=directory
ANR2249I Storage pool POOL0_stgFC is defined.

tsm: TSM0_SITE>define stgpooldirectory POOL0_stgFC /tsm0/pool0/pool0_01,/tsm0/pool0/pool0_02,/tsm0/pool0/pool0_03,/tsm0/pool0/pool0_04,/tsm0/pool0/pool0_05,/tsm0/pool0/pool0_06
ANR3254I Storage pool directory /tsm0/pool0/pool0_01 was defined in storage pool POOL0_stgFC.
ANR3254I Storage pool directory /tsm0/pool0/pool0_02 was defined in storage pool POOL0_stgFC.
ANR3254I Storage pool directory /tsm0/pool0/pool0_03 was defined in storage pool POOL0_stgFC.
ANR3254I Storage pool directory /tsm0/pool0/pool0_04 was defined in storage pool POOL0_stgFC.
ANR3254I Storage pool directory /tsm0/pool0/pool0_05 was defined in storage pool POOL0_stgFC.
ANR3254I Storage pool directory /tsm0/pool0/pool0_06 was defined in storage pool POOL0_stgFC.

tsm: TSM0_SITE>q stgpooldirectory

Storage Pool Name     Directory                                         Access
-----------------     ---------------------------------------------     ------------
POOL0_stgFC           /tsm0/pool0/pool0_01                              Read/Write
POOL0_stgFC           /tsm0/pool0/pool0_02                              Read/Write
POOL0_stgFC           /tsm0/pool0/pool0_03                              Read/Write
POOL0_stgFC           /tsm0/pool0/pool0_04                              Read/Write
POOL0_stgFC           /tsm0/pool0/pool0_05                              Read/Write
POOL0_stgFC           /tsm0/pool0/pool0_06                              Read/Write
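
The container pool itself can be inspected with the query stgpool command (output omitted here):

tsm: TSM0_SITE>query stgpool POOL0_stgFC f=d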


IBM TSM Backup Policies Configuration

Below is an example policy.

tsm: TSM0_SITE>def dom  FS backret=30 archret=30
ANR1500I Policy domain FS defined.

tsm: TSM0_SITE>def pol  FS FS
ANR1510I Policy set FS defined in policy domain FS.

tsm: TSM0_SITE>def mg   FS FS FS_1DAY
ANR1520I Management class FS_1DAY defined in policy domain FS, set FS.

tsm: TSM0_SITE>def co   FS FS FS_1DAY   STANDARD type=backup destination=POOL0_STGFC verexists=32 verdeleted=1 retextra=31 retonly=14
ANR1530I Backup copy group STANDARD defined in policy domain FS, set FS, management class FS_1DAY.

tsm: TSM0_SITE>def mg   FS FS FS_1MONTH
ANR1520I Management class FS_1MONTH defined in policy domain FS, set FS.

tsm: TSM0_SITE>def co   FS FS FS_1MONTH STANDARD type=backup destination=POOL0_STGFC  verexists=4 verdeleted=1 retextra=91 retonly=14
ANR1530I Backup copy group STANDARD defined in policy domain FS, set FS, management class FS_1MONTH.

tsm: TSM0_SITE>as defmg FS FS FS_1DAY
ANR1538I Default management class set to FS_1DAY for policy domain FS, set FS.

tsm: TSM0_SITE>act pol  FS FS
ANR1554W DEFAULT Management class FS_1DAY in policy set FS FS does not have an ARCHIVE copygroup:  files will not be archived by default if this set is activated.

Do you wish to proceed? (Yes (Y)/No (N)) y
ANR1554W DEFAULT Management class FS_1DAY in policy set FS FS does not have an ARCHIVE copygroup:  files will not be archived by default if this set is activated.
ANR1514I Policy set FS activated in policy domain FS.
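
To actually use this policy domain, a client node has to be registered into it; a minimal sketch (the node name and password are purely illustrative):

tsm: TSM0_SITE>register node NODE1 SOME-PASSWORD domain=FS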



I hope that the amount of instructions did not discourage you from one of the best enterprise backup systems – IBM TSM (now IBM Spectrum Protect) – and one of the best high availability clusters – Veritas Cluster Server 🙂

EOF

Bareos Backup Server on FreeBSD

Ever heard about Bareos? You have probably heard about Bacula. Read about the difference here – Why Bareos forked from Bacula?

bareos-logo

If you are interested in more enterprise backup solution then check IBM TSM (Spectrum Protect) on Veritas Cluster Server article.

Bareos (Backup Archiving Recovery Open Sourced) is a network based open source backup solution. It is a 100% open source fork of the backup project from the bacula.org site. The fork has been in development since late 2010 and it has a lot of new features. The source is published on GitHub and licensed under the AGPLv3 license. Bareos supports 'Always Incremental' backup, which is interesting especially for users with big data. The time and network capacity consuming full backups only have to be taken once. Bareos comes with a WebUI for administration tasks and a restore file browser. Bareos can back up data to disk and to tape drives as well as tape libraries. It supports compression and encryption, both hardware-based (like on LTO tape drives) and software-based. You can also get professional services and support from Bareos, as well as a Bareos subscription service that provides you access to special quality assured installation packages.

I started my sysadmin job with backup systems as one of my new responsibilities, so this is like going back to the roots. As I look at the 'backup' market, it is more and more popular – especially in cloud oriented environments – to implement various levels of protection, like GOLD, SILVER and BRONZE for example. They of course have different retention times, numbers of backups kept, and different RTO and RPO. Below is an example implementation of BRONZE level backups in Bareos. I used 3 groups – A, B and C – with the FULL backup starting on DAY 0 (A group), DAY 1 (B group) and DAY 2 (C group).

bareos-sched-levels-256.png

This way you still have FULL backups quite often, and with 3 groups you can balance the network load. For the days on which we will not be doing FULL backups we will be doing DIFFERENTIAL backups. People often confuse them with INCREMENTAL backups. The difference is that DIFFERENTIAL backups are always made against the FULL backup, so it is always 'one level of combining'. INCREMENTAL ones are made against the last backup of any TYPE, so it is possible to have 100+ levels of combining against 99 earlier INCREMENTAL backups and the 1 FULL backup. That is why I prefer DIFFERENTIAL ones here – faster recovery. That is what backups are generally all about – recovery – and some people/companies tend to forget that.

The implementation of BRONZE in these three groups is not perfect, but it 'does the job'. I also made a 'simulation' of how these groups will overlap at the end/beginning of the month; here is the result.

bareos-sched-cross-256.png

Not bad for my taste.

Today I will show you how to install and configure a Bareos Server based on the FreeBSD operating system. It will be the most simplified setup, with all services on a single machine:

  • bareos-dir
  • bareos-sd
  • bareos-webui
  • bareos-fd

I also assume that, in order to provide storage space for the backup data itself, you would mount resources from external NFS shares.

To get familiar with Bareos terminology and technology check their great Manual in HTML or PDF version, depending on which format you prefer for reading documentation. Their FAQ also provides a lot of needed answers.

This diagram may also be useful for you to get some grip on the Bareos world.

bareos-overview-small

System

As every system needs to have a name, we will use the Latin word closest to backup – replica – for our FreeBSD system hostname. The install is generally the same as in the FreeBSD Desktop – Part 2 – Install article. Here is our installed FreeBSD system with a login prompt.

freebsd-nakatomi.jpg

Sorry, couldn't resist 🙂

Here are the 3 most important configuration files after some time spent in vi(1) with them.

root@replica:~ # cat /etc/rc.conf
# NETWORK
  hostname=replica.backup.org
  ifconfig_em0="inet 10.0.10.30/24 up"
  defaultrouter="10.0.10.1"

# DAEMONS
  zfs_enable=YES
  sshd_enable=YES
  nfs_client_enable=YES
  syslogd_flags="-ss"
  sendmail_enable=NONE

# OTHER
  clear_tmp_enable=YES
  dumpdev=NO

# BAREOS
# postgresql_enable=YES
# postgresql_class=pgsql
# bareos_dir_enable=YES
# bareos_sd_enable=YES
# bareos_fd_enable=YES
# php_fpm_enable=YES
# nginx_enable=YES

As you can see, all 'core' services for Bareos are currently disabled on purpose. We will enable them later.

Parameters and modules to be set at boot.

root@replica:~ # cat /boot/loader.conf
# BOOT OPTIONS
  autoboot_delay=2
  kern.geom.label.disk_ident.enable=0
  kern.geom.label.gptid.enable=0

# MODULES
  zfs_load=YES

# IPC
  kern.ipc.shmseg=1024
  kern.ipc.shmmni=1024
  kern.ipc.shmseg=1024

Parameters to be set at runtime.

root@replica:~ # cat /etc/sysctl.conf
# SECURITY
  security.bsd.see_other_uids=0
  security.bsd.see_other_gids=0
  security.bsd.unprivileged_read_msgbuf=0
  security.bsd.unprivileged_proc_debug=0
  security.bsd.stack_guard_page=1
  kern.randompid=9100

# ZFS
  vfs.zfs.min_auto_ashift=12

# DISABLE ANNOYING THINGS
  kern.coredump=0
  hw.syscons.bell=0
  kern.vt.enable_bell=0

# IPC
  kern.ipc.shmall=524288
  kern.ipc.maxsockbuf=5242880
  kern.ipc.shm_allow_removed=1

After the install we will disable mounting of /zroot.

root@replica:/ # zfs set mountpoint=none zroot

As we have sendmail(8) disabled we will need to take care of its queue.

root@replica:~ # cat > /etc/cron.d/sendmail-clean-clientmqueue << __EOF
# CLEAN SENDMAIL
0 * * * * root /bin/rm -r -f /var/spool/clientmqueue/*
__EOF

Assuming the NFS servers are configured in the /etc/hosts file, the 'complete' /etc/hosts file would look like this.

root@replica:~ # grep '^[^#]' /etc/hosts
::1        localhost localhost.my.domain
127.0.0.1  localhost localhost.my.domain
10.0.10.40 replica.backup.org replica
10.0.10.50 nfs-pri.backup.org nfs-pri
10.0.20.50 nfs-sec.backup.org nfs-sec

Let's verify outside-world connectivity – needed for adding the Bareos packages.

root@replica:~ # nc -v bareos.org 443
Connection to bareos.org 443 port [tcp/https] succeeded!
^C
root@replica:~ #

Packages

As we want the latest packages we will modify /etc/pkg/FreeBSD.conf – the pkg(8) repository configuration file – to use the latest branch instead of quarterly.

root@replica:~ # grep '^[^#]' /etc/pkg/FreeBSD.conf
FreeBSD: {
  url: "pkg+http://pkg.FreeBSD.org/${ABI}/quarterly",
  mirror_type: "srv",
  signature_type: "fingerprints",
  fingerprints: "/usr/share/keys/pkg",
  enabled: yes
}

root@replica:~ # sed -i '' s/quarterly/latest/g /etc/pkg/FreeBSD.conf

root@replica:~ # grep '^[^#]' /etc/pkg/FreeBSD.conf
FreeBSD: {
  url: "pkg+http://pkg.FreeBSD.org/${ABI}/latest",
  mirror_type: "srv",
  signature_type: "fingerprints",
  fingerprints: "/usr/share/keys/pkg",
  enabled: yes
}

We will use the Bareos packages from pkg(8) as they are available – no need to waste time and power on compilation.

root@replica:~ # pkg search bareos
The package management tool is not yet installed on your system.
Do you want to fetch and install it now? [y/N]: y
(...)
bareos-bat-16.2.7              Backup archiving recovery open sourced (GUI)
bareos-client-16.2.7           Backup archiving recovery open sourced (client)
bareos-client-static-16.2.7    Backup archiving recovery open sourced (static client)
bareos-docs-16.2.7             Bareos document set (PDF)
bareos-server-16.2.7           Backup archiving recovery open sourced (server)
bareos-traymonitor-16.2.7      Backup archiving recovery open sourced (traymonitor)
bareos-webui-16.2.7            PHP-Frontend to manage Bareos over the web

Now we will install Bareos along with all needed components for its environment.

root@replica:~ # pkg install \
  bareos-client bareos-server bareos-webui postgresql95-server nginx \
  php56 php56-xml php56-session php56-simplexml php56-gd php56-ctype \
  php56-mbstring php56-zlib php56-tokenizer php56-iconv php56-mcrypt \
  php56-pear-DB_ldap php56-zip php56-dom php56-sqlite3 php56-gettext \
  php56-curl php56-json php56-opcache php56-wddx php56-hash php56-soap

The bareos, pgsql and www users have been added by pkg(8) along with their packages.

root@replica:~ # id bareos
uid=997(bareos) gid=997(bareos) groups=997(bareos)

root@replica:~ # id pgsql
uid=70(pgsql) gid=70(pgsql) groups=70(pgsql)

root@replica:~ # id www
uid=80(www) gid=80(www) groups=80(www)

PostgreSQL

First we will setup the PostgreSQL database.

We will add a separate pgsql login class for the PostgreSQL database user.

root@replica:~ # cat >> /etc/login.conf << __EOF
# PostgreSQL
pgsql:\
        :lang=en_US.UTF-8:\
        :setenv=LC_COLLATE=C:\
        :tc=default:

__EOF

This is one of the rare occasions when I would appreciate the -p flag from the AIX grep command to display the whole paragraph 😉

root@replica:~ # grep -B 1 -A 3 pgsql /etc/login.conf
# PostgreSQL
pgsql:\
        :lang=en_US.UTF-8:\
        :setenv=LC_COLLATE=C:\
        :tc=default:

Let's reload the login database.

root@replica:~ # cap_mkdb /etc/login.conf

Here are the PostgreSQL rc(8) startup script 'options' that can be set in the /etc/rc.conf file.

root@replica:~ # grep '#  postgresql' /usr/local/etc/rc.d/postgresql
#  postgresql_enable="YES"
#  postgresql_data="/usr/local/pgsql/data"
#  postgresql_flags="-w -s -m fast"
#  postgresql_initdb_flags="--encoding=utf-8 --lc-collate=C"
#  postgresql_class="default"
#  postgresql_profiles=""

We only need postgresql_enable and postgresql_class to be set.

We will enable them now in the /etc/rc.conf file.

root@replica:~ # grep -A 10 BAREOS /etc/rc.conf
# BAREOS
  postgresql_enable=YES
  postgresql_class=pgsql
# bareos_dir_enable=YES
# bareos_sd_enable=YES
# bareos_fd_enable=YES
# php_fpm_enable=YES
# nginx_enable=YES

We will now init the PostgreSQL database for Bareos.

root@replica:~ # /usr/local/etc/rc.d/postgresql initdb
The files belonging to this database system will be owned by user "pgsql".
This user must also own the server process.

The database cluster will be initialized with locales
  COLLATE:  C
  CTYPE:    en_US.UTF-8
  MESSAGES: en_US.UTF-8
  MONETARY: en_US.UTF-8
  NUMERIC:  en_US.UTF-8
  TIME:     en_US.UTF-8
The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory /usr/local/pgsql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
creating template1 database in /usr/local/pgsql/data/base/1 ... ok
initializing pg_authid ... ok
initializing dependencies ... ok
creating system views ... ok
loading system objects' descriptions ... ok
creating collations ... ok
creating conversions ... ok
creating dictionaries ... ok
setting privileges on built-in objects ... ok
creating information schema ... ok
loading PL/pgSQL server-side language ... ok
vacuuming database template1 ... ok
copying template1 to template0 ... ok
copying template1 to postgres ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    /usr/local/bin/pg_ctl -D /usr/local/pgsql/data -l logfile start

… and start it.

root@replica:~ # /usr/local/etc/rc.d/postgresql start
LOG:  ending log output to stderr
HINT:  Future log output will go to log destination "syslog".

We will now take care of the Bareos server configuration. There are a lot of *.sample files that we do not need. We also need to take care of permissions.

root@replica:~ # chown -R bareos:bareos /usr/local/etc/bareos
root@replica:~ # find /usr/local/etc/bareos -type f -exec chmod 640 {} ';'
root@replica:~ # find /usr/local/etc/bareos -type d -exec chmod 750 {} ';'
root@replica:~ # find /usr/local/etc/bareos -name \*\.sample -delete

We also need to change permissions for the /var/run and /var/db directories for Bareos.

root@replica:~ # chown -R bareos:bareos /var/db/bareos
root@replica:~ # chown -R bareos:bareos /var/run/bareos

To keep a 'trace' of our changes we will keep a copy of the original configuration, so we can track what we have changed in the process of configuring our Bareos environment.

root@replica:~ # cp -a /usr/local/etc/bareos /usr/local/etc/bareos.ORG

Now we will configure the Bareos Catalog in the /usr/local/etc/bareos/bareos-dir.d/catalog/MyCatalog.conf file; here are its contents after our modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/catalog/MyCatalog.conf
Catalog {
  Name = MyCatalog
  dbdriver = "postgresql"
  dbname = "bareos"
  dbuser = "bareos"
  dbpassword = "BAREOS-DATABASE-PASSWORD"
}

Let's make sure that the pgsql and www users are in the bareos group, so they can read its configuration files.

root@replica:~ # pw groupmod bareos -m pgsql

root@replica:~ # id pgsql
uid=70(pgsql) gid=70(pgsql) groups=70(pgsql),997(bareos)

root@replica:~ # pw groupmod bareos -m www

root@replica:~ # id www
uid=80(www) gid=80(www) groups=80(www),997(bareos)

Now we will prepare the PostgreSQL database for our Bareos instance. We will use the scripts provided by the Bareos package from the /usr/local/lib/bareos/scripts path.

root@replica:~ # su - pgsql

$ whoami
pgsql

$ /usr/local/lib/bareos/scripts/create_bareos_database
Creating postgresql database
CREATE DATABASE
ALTER DATABASE
Database encoding OK
Creating of bareos database succeeded.

$ /usr/local/lib/bareos/scripts/make_bareos_tables
Making postgresql tables
CREATE TABLE
ALTER TABLE
CREATE INDEX
CREATE TABLE
ALTER TABLE
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE INDEX
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
DELETE 0
INSERT 0 1
Creation of Bareos PostgreSQL tables succeeded.

$ /usr/local/lib/bareos/scripts/grant_bareos_privileges
Granting postgresql tables
CREATE ROLE
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
Privileges for user bareos granted ON database bareos.

We can now verify that we have the needed database created.

root@replica:~ # su -m bareos -c 'psql -l'
                             List of databases
   Name    | Owner | Encoding  | Collate |    Ctype    | Access privileges 
-----------+-------+-----------+---------+-------------+-------------------
 bareos    | pgsql | SQL_ASCII | C       | C           | 
 postgres  | pgsql | UTF8      | C       | en_US.UTF-8 | 
 template0 | pgsql | UTF8      | C       | en_US.UTF-8 | =c/pgsql         +
           |       |           |         |             | pgsql=CTc/pgsql
 template1 | pgsql | UTF8      | C       | en_US.UTF-8 | =c/pgsql         +
           |       |           |         |             | pgsql=CTc/pgsql
(4 rows)

We will also add a housekeeping script for the PostgreSQL database and put it into crontab(1).

root@replica:~ # su - pgsql

$ whoami
pgsql

$ cat > /usr/local/pgsql/vacuum.sh << __EOF
#! /bin/sh

/usr/local/bin/vacuumdb -a -z 1> /dev/null 2> /dev/null
/usr/local/bin/reindexdb -a   1> /dev/null 2> /dev/null
/usr/local/bin/reindexdb -s   1> /dev/null 2> /dev/null
__EOF

$ chmod +x /usr/local/pgsql/vacuum.sh

$ cat /usr/local/pgsql/vacuum.sh
#! /bin/sh

/usr/local/bin/vacuumdb -a -z 1> /dev/null 2> /dev/null
/usr/local/bin/reindexdb -a   1> /dev/null 2> /dev/null
/usr/local/bin/reindexdb -s   1> /dev/null 2> /dev/null

$ crontab -e

$ exit

root@replica:~ # cat /var/cron/tabs/pgsql
# DO NOT EDIT THIS FILE - edit the master and reinstall.
# (/tmp/crontab.Be9j9VVCUa installed on Thu Apr 26 21:45:04 2018)
# (Cron version -- $FreeBSD$)
0 0 * * * /usr/local/pgsql/vacuum.sh

root@replica:~ # su -m pgsql -c 'crontab -l'
0 0 * * * /usr/local/pgsql/vacuum.sh

Storage

I assume that the primary storage will be mounted in the /bareos directory from one NFS server, while the Disaster Recovery site will be mounted as /bareos-dr from another NFS server. Below is an example NFS configuration for these mount points.

root@replica:~ # mkdir /bareos /bareos-dr

root@replica:~ # mount -t nfs
nfs-pri.backup.org:/export/bareos on /bareos (nfs, noatime)
nfs-sec.backup.org:/export/bareos-dr on /bareos-dr (nfs, noatime)

root@replica:~ # cat >> /etc/fstab << __EOF
#DEV                                  #MNT        #FS  #OPTS                                                         #DP
nfs-pri.backup.org:/export/bareos     /bareos     nfs  rw,noatime,rsize=1048576,wsize=1048576,readahead=4,soft,intr  0 0
nfs-sec.backup.org:/export/bareos-dr  /bareos-dr  nfs  rw,noatime,rsize=1048576,wsize=1048576,readahead=4,soft,intr  0 0
__EOF
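
To double check that the mount options from our fstab(5) entries are really in effect after mounting, the nfsstat(1) command with the -m flag can be used; its output is omitted here as it will differ per setup.

root@replica:~ # nfsstat -m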

root@replica:~ # mkdir -p /bareos/bootstrap
root@replica:~ # mkdir -p /bareos/restore
root@replica:~ # mkdir -p /bareos/storage/FileStorage

root@replica:~ # mkdir -p /bareos-dr/bootstrap
root@replica:~ # mkdir -p /bareos-dr/restore
root@replica:~ # mkdir -p /bareos-dr/storage/FileStorage

root@replica:~ # chown -R bareos:bareos /bareos /bareos-dr

root@replica:~ # find /bareos /bareos-dr -ls | column -t
69194  1  drwxr-xr-x  5  bareos  bareos  5  Apr  27  00:42  /bareos
72239  1  drwxr-xr-x  2  bareos  bareos  2  Apr  27  00:42  /bareos/restore
72240  1  drwxr-xr-x  3  bareos  bareos  3  Apr  27  00:42  /bareos/storage
72241  1  drwxr-xr-x  2  bareos  bareos  2  Apr  27  00:42  /bareos/storage/FileStorage
72238  1  drwxr-xr-x  2  bareos  bareos  2  Apr  27  00:42  /bareos/bootstrap
69195  1  drwxr-xr-x  5  bareos  bareos  5  Apr  27  00:43  /bareos-dr
72254  1  drwxr-xr-x  3  bareos  bareos  3  Apr  27  00:43  /bareos-dr/storage
72255  1  drwxr-xr-x  2  bareos  bareos  2  Apr  27  00:43  /bareos-dr/storage/FileStorage
72253  1  drwxr-xr-x  2  bareos  bareos  2  Apr  27  00:42  /bareos-dr/restore
72252  1  drwxr-xr-x  2  bareos  bareos  2  Apr  27  00:42  /bareos-dr/bootstrap

Bareos

As we have already used BAREOS-DATABASE-PASSWORD for the bareos user on PostgreSQL’s Bareos database, we will use the following passwords for the remaining parts of the Bareos subsystems. I think these passwords are self-explanatory as to which Bareos components they belong to 🙂

  • BAREOS-DATABASE-PASSWORD
  • BAREOS-DIR-PASSWORD
  • BAREOS-SD-PASSWORD
  • BAREOS-FD-PASSWORD
  • BAREOS-MON-PASSWORD
  • ADMIN-PASSWORD
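
If you need actual random values for these placeholders, a minimal sketch using openssl(1) is enough; run it once per password and paste the results into the configuration files.

root@replica:~ # openssl rand -hex 16   # generate a value for BAREOS-DATABASE-PASSWORD
root@replica:~ # openssl rand -hex 16   # generate a value for BAREOS-DIR-PASSWORD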

We will now configure all these Bareos subsystems.

We already modified the MyCatalog.conf file, here are its contents.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/catalog/MyCatalog.conf
Catalog {
  Name = MyCatalog
  dbdriver = "postgresql"
  dbname = "bareos"
  dbuser = "bareos"
  dbpassword = "BAREOS-DATABASE-PASSWORD"
}

Contents of the /usr/local/etc/bareos/bconsole.d/bconsole.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bconsole.d/bconsole.conf
#
# Bareos User Agent (or Console) Configuration File
#

Director {
  Name = replica.backup.org
  address = localhost
  Password = "BAREOS-DIR-PASSWORD"
  Description = "Bareos Console credentials for local Director"
}

Contents of the /usr/local/etc/bareos/bareos-dir.d/director/bareos-dir.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/director/bareos-dir.conf
Director {
  Name = replica.backup.org
  QueryFile = "/usr/local/lib/bareos/scripts/query.sql"
  Maximum Concurrent Jobs = 100
  Password = "BAREOS-DIR-PASSWORD"
  Messages = Daemon
  Auditing = yes

  # Enable the Heartbeat if you experience connection losses
  # (eg. because of your router or firewall configuration).
  # Additionally the Heartbeat can be enabled in bareos-sd and bareos-fd.
  #
  # Heartbeat Interval = 1 min

  # remove comment in next line to load dynamic backends from specified directory
  # Backend Directory = /usr/local/lib

  # remove comment from "Plugin Directory" to load plugins from specified directory.
  # if "Plugin Names" is defined, only the specified plugins will be loaded,
  # otherwise all director plugins (*-dir.so) from the "Plugin Directory".
  #
  # Plugin Directory = /usr/local/lib/bareos/plugins
  # Plugin Names = ""
}

Contents of the /usr/local/etc/bareos/bareos-dir.d/job/RestoreFiles.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/job/RestoreFiles.conf
Job {
  Name = "RestoreFiles"
  Description = "Standard Restore."
  Type = Restore
  Client = Default
  FileSet = "SelfTest"
  Storage = File
  Pool = BR-MO
  Messages = Standard
  Where = /bareos/restore
  Accurate = yes
}

New /usr/local/etc/bareos/bareos-dir.d/client/Default.conf file.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/client/Default.conf
Client {
  Name = Default
  address = replica.backup.org
  Password = "BAREOS-FD-PASSWORD"
}

New /usr/local/etc/bareos/bareos-dir.d/client/replica.backup.org.conf file.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/client/replica.backup.org.conf
Client {
  Name = replica.backup.org
  Description = "Client resource of the Director itself."
  address = replica.backup.org
  Password = "BAREOS-FD-PASSWORD"
}

File below is left unchanged.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/job/BackupCatalog.conf
Job {
  Name = "BackupCatalog"
  Description = "Backup the catalog database (after the nightly save)"
  JobDefs = "DefaultJob"
  Level = Full
  FileSet="Catalog"
  Schedule = "WeeklyCycleAfterBackup"

  # This creates an ASCII copy of the catalog
  # Arguments to make_catalog_backup.pl are:
  #  make_catalog_backup.pl <catalog-name>
  RunBeforeJob = "/usr/local/lib/bareos/scripts/make_catalog_backup.pl MyCatalog"

  # This deletes the copy of the catalog
  RunAfterJob  = "/usr/local/lib/bareos/scripts/delete_catalog_backup"

  # This sends the bootstrap via mail for disaster recovery.
  # Should be sent to another system, please change recipient accordingly
  Write Bootstrap = "|/usr/local/bin/bsmtp -h localhost -f \"\(Bareos\) \" -s \"Bootstrap for Job %j\" root@localhost" # (#01)
  Priority = 11                   # run after main backup
}

File below is left unchanged.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/messages/Standard.conf
Messages {
  Name = Standard
  Description = "Reasonable message delivery -- send most everything to email address and to the console."
  operatorcommand = "/usr/local/bin/bsmtp -h localhost -f \"\(Bareos\) \<%r\>\" -s \"Bareos: Intervention needed for %j\" %r"
  mailcommand = "/usr/local/bin/bsmtp -h localhost -f \"\(Bareos\) \<%r\>\" -s \"Bareos: %t %e of %c %l\" %r"
  operator = root@localhost = mount                                 # (#03)
  mail = root@localhost = all, !skipped, !saved, !audit             # (#02)
  console = all, !skipped, !saved, !audit
  append = "/var/log/bareos/bareos.log" = all, !skipped, !saved, !audit
  catalog = all, !skipped, !saved, !audit
}

File below is left unchanged.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/messages/Daemon.conf
Messages {
  Name = Daemon
  Description = "Message delivery for daemon messages (no job)."
  mailcommand = "/usr/local/bin/bsmtp -h localhost -f \"\(Bareos\) \<%r\>\" -s \"Bareos daemon message\" %r"
  mail = root@localhost = all, !skipped, !audit # (#02)
  console = all, !skipped, !saved, !audit
  append = "/var/log/bareos/bareos.log" = all, !skipped, !audit
  append = "/var/log/bareos/bareos-audit.log" = audit
}

Pools

By default Bareos comes with four pools configured; we will not use them, so we will delete their configuration files.

root@replica:~ # ls -l /usr/local/etc/bareos/bareos-dir.d/pool
total 14
-rw-rw----  1 bareos  bareos  536 Apr 16 08:14 Differential.conf
-rw-rw----  1 bareos  bareos  512 Apr 16 08:14 Full.conf
-rw-rw----  1 bareos  bareos  534 Apr 16 08:14 Incremental.conf
-rw-rw----  1 bareos  bareos   48 Apr 16 08:14 Scratch.conf

root@replica:~ # rm -f /usr/local/etc/bareos/bareos-dir.d/pool/*.conf

We will now create our two pools, one for the DAILY backups and one for the MONTHLY backups.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/pool/BRONZE-DAILY-POOL.conf
Pool {
  Name = BR-DA
  Pool Type = Backup
  Recycle = yes                       # Bareos can automatically recycle Volumes
  AutoPrune = yes                     # Prune expired volumes
  Volume Retention = 7 days           # How long should the Full Backups be kept? (#06)
  Maximum Volume Bytes = 2G           # Limit Volume size to something reasonable
  Maximum Volumes = 100000            # Limit number of Volumes in Pool
  Label Format = "BR-DA-"             # Volumes will be labeled "BR-DA-"
}

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/pool/BRONZE-MONTHLY-POOL.conf
Pool {
  Name = BR-MO
  Pool Type = Backup
  Recycle = yes                       # Bareos can automatically recycle Volumes
  AutoPrune = yes                     # Prune expired volumes
  Volume Retention = 120 days         # How long should the Full Backups be kept? (#06)
  Maximum Volume Bytes = 2G           # Limit Volume size to something reasonable
  Maximum Volumes = 100000            # Limit number of Volumes in Pool
  Label Format = "BR-MO-"             # Volumes will be labeled "BR-MO-"
}

File below is left unchanged.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/schedule/WeeklyCycle.conf
Schedule {
  Name = "WeeklyCycle"
  Run = Full 1st sat at 21:00                   # (#04)
  Run = Differential 2nd-5th sat at 21:00       # (#07)
  Run = Incremental mon-fri at 21:00            # (#10)
}

Contents of the /usr/local/etc/bareos/bareos-dir.d/jobdefs/DefaultJob.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/jobdefs/DefaultJob.conf
JobDefs {
  Name = "DefaultJob"
  Type = Backup
  Level = Differential
  Client = Default
  FileSet = "SelfTest"
  Schedule = "WeeklyCycle"
  Storage = File
  Messages = Standard
  Pool = BR-DA
  Priority = 10
  Write Bootstrap = "/bareos/bootstrap/%c.bsr"
}

Contents of the /usr/local/etc/bareos/bareos-dir.d/storage/File.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/storage/File.conf
Storage {
  Name = File
  Address = replica.backup.org
  Password = "BAREOS-SD-PASSWORD"
  Device = FileStorage
  Media Type = File
}

Contents of the /usr/local/etc/bareos/bareos-dir.d/console/bareos-mon.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/console/bareos-mon.conf
Console {
  Name = bareos-mon
  Description = "Restricted console used by tray-monitor to get the status of the director."
  Password = "BAREOS-MON-PASSWORD"
  CommandACL = status, .status
  JobACL = *all*
}

Contents of the /usr/local/etc/bareos/bareos-dir.d/fileset/Catalog.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/fileset/Catalog.conf
FileSet {
  Name = "Catalog"
  Description = "Backup the catalog dump and Bareos configuration files."
  Include {
    Options {
      signature = MD5
      Compression = lzo
    }
    File = "/var/db/bareos"
    File = "/usr/local/etc/bareos"
  }
}

Contents of the /usr/local/etc/bareos/bareos-dir.d/fileset/SelfTest.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/fileset/SelfTest.conf
FileSet {
  Name = "SelfTest"
  Description = "fileset just to backup some files for selftest"
  Include {
    Options {
      Signature   = MD5
      Compression = lzo
    }
    File = "/usr/local/sbin"
  }
}

We do not need the bundled LinuxAll.conf and WindowsAllDrives.conf filesets, so we will delete them.

root@replica:~ # ls -l /usr/local/etc/bareos/bareos-dir.d/fileset/
total 18
-rw-rw----  1 bareos  bareos  250 Apr 27 02:25 Catalog.conf
-rw-rw----  1 bareos  bareos  765 Apr 16 08:14 LinuxAll.conf
-rw-rw----  1 bareos  bareos  210 Apr 27 02:27 SelfTest.conf
-rw-rw----  1 bareos  bareos  362 Apr 16 08:14 WindowsAllDrives.conf

root@replica:~ # rm -f /usr/local/etc/bareos/bareos-dir.d/fileset/LinuxAll.conf

root@replica:~ # rm -f /usr/local/etc/bareos/bareos-dir.d/fileset/WindowsAllDrives.conf

We will now define two new filesets in the Windows.conf and UNIX.conf files.

New /usr/local/etc/bareos/bareos-dir.d/fileset/Windows.conf file.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/fileset/Windows.conf
FileSet {
  Name = Windows
  Enable VSS = yes
  Include {
    Options {
      Signature = MD5
      Drive Type = fixed
      IgnoreCase = yes
      WildFile = "[A-Z]:/pagefile.sys"
      WildDir  = "[A-Z]:/RECYCLER"
      WildDir  = "[A-Z]:/$RECYCLE.BIN"
      WildDir  = "[A-Z]:/System Volume Information"
      Exclude = yes
      Compression = lzo
    }
    File = /
  }
}

New /usr/local/etc/bareos/bareos-dir.d/fileset/UNIX.conf file.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/fileset/UNIX.conf
FileSet {
  Name = "UNIX"
  Include {
    Options {
      Signature = MD5 # calculate md5 checksum per file
      One FS = No     # change into other filesystems
      FS Type = ufs
      FS Type = btrfs
      FS Type = ext2  # filesystems of given types will be backed up
      FS Type = ext3  # others will be ignored
      FS Type = ext4
      FS Type = reiserfs
      FS Type = jfs
      FS Type = xfs
      FS Type = zfs
      noatime = yes
      Compression = lzo
    }
    File = /
  }
  # Things that usually have to be excluded
  # You have to exclude /tmp
  # on your bareos server
  Exclude {
    File = /var/db/bareos
    File = /tmp
    File = /proc
    File = /sys
    File = /var/tmp
    File = /.journal
    File = /.fsck
  }
}

File below is left unchanged.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/profile/operator.conf
Profile {
   Name = operator
   Description = "Profile allowing normal Bareos operations."

   Command ACL = !.bvfs_clear_cache, !.exit, !.sql
   Command ACL = !configure, !create, !delete, !purge, !sqlquery, !umount, !unmount
   Command ACL = *all*

   Catalog ACL = *all*
   Client ACL = *all*
   FileSet ACL = *all*
   Job ACL = *all*
   Plugin Options ACL = *all*
   Pool ACL = *all*
   Schedule ACL = *all*
   Storage ACL = *all*
   Where ACL = *all*
}

Contents of the /usr/local/etc/bareos/bareos-sd.d/messages/Standard.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-sd.d/messages/Standard.conf
Messages {
  Name = Standard
  Director = replica.backup.org = all
  Description = "Send all messages to the Director."
}

We will add the /bareos/storage/FileStorage path as our FileStorage place for backups.

Contents of the /usr/local/etc/bareos/bareos-sd.d/device/FileStorage.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-sd.d/device/FileStorage.conf
Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /bareos/storage/FileStorage
  LabelMedia = yes;                   # lets Bareos label unlabeled media
  Random Access = yes;
  AutomaticMount = yes;               # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Description = "File device. A connecting Director must have the same Name and MediaType."
}

Contents of the /usr/local/etc/bareos/bareos-sd.d/storage/bareos-sd.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-sd.d/storage/bareos-sd.conf
Storage {
  Name = replica.backup.org
  Maximum Concurrent Jobs = 20

  # remove comment from "Plugin Directory" to load plugins from specified directory.
  # if "Plugin Names" is defined, only the specified plugins will be loaded,
  # otherwise all storage plugins (*-sd.so) from the "Plugin Directory".
  #
  # Plugin Directory = /usr/local/lib/bareos/plugins
  # Plugin Names = ""
}

Contents of the /usr/local/etc/bareos/bareos-sd.d/director/bareos-mon.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-sd.d/director/bareos-mon.conf
Director {
  Name = bareos-mon
  Password = "BAREOS-SD-PASSWORD"
  Monitor = yes
  Description = "Restricted Director, used by tray-monitor to get the status of this storage daemon."
}

Contents of the /usr/local/etc/bareos/bareos-sd.d/director/bareos-dir.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-sd.d/director/bareos-dir.conf
Director {
  Name = replica.backup.org
  Password = "BAREOS-SD-PASSWORD"
  Description = "Director, who is permitted to contact this storage daemon."
}

Contents of the /usr/local/etc/bareos/bareos-fd.d/messages/Standard.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-fd.d/messages/Standard.conf
Messages {
  Name = Standard
  Director = replica.backup.org = all, !skipped, !restored
  Description = "Send relevant messages to the Director."
}

Contents of the /usr/local/etc/bareos/bareos-fd.d/director/bareos-dir.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-fd.d/director/bareos-dir.conf
Director {
  Name = replica.backup.org
  Password = "BAREOS-FD-PASSWORD"
  Description = "Allow the configured Director to access this file daemon."
}

Contents of the /usr/local/etc/bareos/bareos-fd.d/director/bareos-mon.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-fd.d/director/bareos-mon.conf
Director {
  Name = bareos-mon
  Password = "BAREOS-MON-PASSWORD"
  Monitor = yes
  Description = "Restricted Director, used by tray-monitor to get the status of this file daemon."
}

Contents of the /usr/local/etc/bareos/bareos-fd.d/client/myself.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-fd.d/client/myself.conf
Client {
  Name = replica.backup.org
  Maximum Concurrent Jobs = 20

  # remove comment from "Plugin Directory" to load plugins from specified directory.
  # if "Plugin Names" is defined, only the specified plugins will be loaded,
  # otherwise all storage plugins (*-fd.so) from the "Plugin Directory".
  #
  # Plugin Directory = /usr/local/lib/bareos/plugins
  # Plugin Names = ""

  # if compatible is set to yes, we are compatible with bacula
  # if set to no, new bareos features are enabled which is the default
  # compatible = yes
}

Contents of the /usr/local/etc/bareos/bareos-dir.d/client/bareos-fd.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/client/bareos-fd.conf
Client {
  Name = bareos-fd
  Description = "Client resource of the Director itself."
  Address = localhost
  Password = "BAREOS-FD-PASSWORD"
}

Let’s see which files and Bareos components hold which passwords.

root@replica:~ # cd /usr/local/etc/bareos

root@replica:/usr/local/etc/bareos # pwd
/usr/local/etc/bareos

root@replica:/usr/local/etc/bareos # grep -r Password . | sort -k 4 | column -t
./bareos-dir.d/director/bareos-dir.conf:        Password  =  "BAREOS-DIR-PASSWORD"
./bconsole.d/bconsole.conf:                     Password  =  "BAREOS-DIR-PASSWORD"
./bareos-dir.d/client/Default.conf:             Password  =  "BAREOS-FD-PASSWORD"
./bareos-dir.d/client/bareos-fd.conf:           Password  =  "BAREOS-FD-PASSWORD"
./bareos-dir.d/client/replica.backup.org.conf:  Password  =  "BAREOS-FD-PASSWORD"
./bareos-fd.d/director/bareos-dir.conf:         Password  =  "BAREOS-FD-PASSWORD"
./bareos-dir.d/console/bareos-mon.conf:         Password  =  "BAREOS-MON-PASSWORD"
./bareos-fd.d/director/bareos-mon.conf:         Password  =  "BAREOS-MON-PASSWORD"
./bareos-dir.d/storage/File.conf:               Password  =  "BAREOS-SD-PASSWORD"
./bareos-sd.d/director/bareos-dir.conf:         Password  =  "BAREOS-SD-PASSWORD"
./bareos-sd.d/director/bareos-mon.conf:         Password  =  "BAREOS-SD-PASSWORD"
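
When swapping these placeholders for real passwords later, a sed(1) one-liner per placeholder keeps all the files consistent. This is only a sketch, REAL-DIR-PASSWORD is an example value, and note that FreeBSD sed(1) needs the empty '' after -i.

root@replica:/usr/local/etc/bareos # grep -rl 'BAREOS-DIR-PASSWORD' . | xargs sed -i '' 's/BAREOS-DIR-PASSWORD/REAL-DIR-PASSWORD/g'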

Let’s fix the rights after creating all the new files.

root@replica:~ # chown -R bareos:bareos /usr/local/etc/bareos
root@replica:~ # find /usr/local/etc/bareos -type f -exec chmod 640 {} ';'
root@replica:~ # find /usr/local/etc/bareos -type d -exec chmod 750 {} ';'

Bareos WebUI

Now we will add/configure files for the Bareos WebUI interface.

The main Nginx webserver configuration file.

root@replica:~ # cat /usr/local/etc/nginx/nginx.conf
user                 www;
worker_processes     4;
worker_rlimit_nofile 51200;
error_log            /var/log/nginx/error.log;

events {
  worker_connections 1024;
}

http {
  include           mime.types;
  default_type      application/octet-stream;
  log_format        main '$remote_addr - $remote_user [$time_local] "$request" ';
  access_log        /var/log/nginx/access.log main;
  sendfile          on;
  keepalive_timeout 65;

  server {
    listen       9100;
    server_name  replica.backup.org bareos;
    root         /usr/local/www/bareos-webui/public;

    location / {
      index index.php;
      try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ .php$ {
      fastcgi_pass 127.0.0.1:9000;
      fastcgi_param APPLICATION_ENV production;
      fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
      include fastcgi_params;
      try_files $uri =404;
    }
  }
}

For PHP we will modify the bundled /usr/local/etc/php.ini-production config file that comes with the package.

root@replica:~ # cp /usr/local/etc/php.ini-production /usr/local/etc/php.ini

root@replica:~ # vi /usr/local/etc/php.ini

We only add the timezone; for my location it is Europe/Warsaw.

root@replica:~ # diff -u php.ini-production php.ini
--- php.ini-production  2017-08-12 03:23:36.000000000 +0200
+++ php.ini     2017-09-12 18:50:40.513138000 +0200
@@ -934,6 +934,7 @@
 ; Defines the default timezone used by the date functions
 ; http://php.net/date.timezone
-;date.timezone =
+date.timezone = Europe/Warsaw

 ; http://php.net/date.default-latitude
 ;date.default_latitude = 31.7667
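
Assuming the PHP command line binary (php) from the packages is also installed, we can quickly verify that the timezone setting is picked up; it should report Europe/Warsaw.

root@replica:~ # php -i | grep date.timezone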

Here is the PHP php-fpm daemon configuration.

root@replica:~ # cat /usr/local/etc/php-fpm.conf
[global]
pid = run/php-fpm.pid
log_level = notice

[www]
user = www
group = www
listen = 127.0.0.1:9000
listen.backlog = -1
listen.owner = www
listen.group = www
listen.mode = 0660
listen.allowed_clients = 127.0.0.1
pm = static
pm.max_children = 4
pm.start_servers = 1
pm.min_spare_servers = 0
pm.max_spare_servers = 4
pm.process_idle_timeout = 1000s;
pm.max_requests = 500
request_terminate_timeout = 0
rlimit_files = 51200
env[HOSTNAME] = $HOSTNAME
env[PATH] = /usr/local/bin:/usr/bin:/bin
env[TMP] = /tmp
env[TMPDIR] = /tmp
env[TEMP] = /tmp

Rest of the Bareos WebUI configuration.

New /usr/local/etc/bareos/bareos-dir.d/console/admin.conf file.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/console/admin.conf
Console {
  Name = admin
  Password = ADMIN-PASSWORD
  Profile = webui-admin
}

New /usr/local/etc/bareos/bareos-dir.d/profile/webui-admin.conf file.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/profile/webui-admin.conf
Profile {
  Name = webui-admin
  CommandACL = !.bvfs_clear_cache, !.exit, !.sql, !configure, !create, !delete, !purge, !sqlquery, !umount, !unmount, *all*
  Job ACL = *all*
  Schedule ACL = *all*
  Catalog ACL = *all*
  Pool ACL = *all*
  Storage ACL = *all*
  Client ACL = *all*
  FileSet ACL = *all*
  Where ACL = *all*
  Plugin Options ACL = *all*
}

In the directors.ini file shown below you may add other Directors as well.

Modified /usr/local/etc/bareos-webui/directors.ini file.

root@replica:~ # cat /usr/local/etc/bareos-webui/directors.ini
;------------------------------------------------------------------------------
; Section localhost-dir
;------------------------------------------------------------------------------
[replica.backup.org]
enabled = "yes"
diraddress = "replica.backup.org"
dirport = 9101
catalog = "MyCatalog"

Modified /usr/local/etc/bareos-webui/configuration.ini file.

root@replica:~ # cat /usr/local/etc/bareos-webui/configuration.ini
;------------------------------------------------------------------------------
; SESSION SETTINGS
;------------------------------------------------------------------------------
[session]
timeout=3600

;------------------------------------------------------------------------------
; DASHBOARD SETTINGS
;------------------------------------------------------------------------------
[dashboard]
autorefresh_interval=60000

;------------------------------------------------------------------------------
; TABLE SETTINGS
;------------------------------------------------------------------------------
[tables]
pagination_values=10,25,50,100
pagination_default_value=25
save_previous_state=false

;------------------------------------------------------------------------------
; VARIOUS SETTINGS
;------------------------------------------------------------------------------
[autochanger]
labelpooltype=scratch

Last but not least, we need to set permissions for Bareos WebUI configuration files.

root@replica:~ # chown -R www:www /usr/local/etc/bareos-webui
root@replica:~ # chown -R www:www /usr/local/www/bareos-webui

Logs

Let’s create the needed log files and fix their permissions.

root@replica:~ # chown -R bareos:bareos /var/log/bareos
root@replica:~ # :>               /var/log/php-fpm.log
root@replica:~ # chown -R www:www /var/log/php-fpm.log
root@replica:~ # chown -R www:www /var/log/nginx

We will now add rules to the newsyslog(8) log rotation tool; we do not want our filesystem to fill up, do we?

As newsyslog(8) supports the *.conf.d include directories, we will use them instead of modifying the main /etc/newsyslog.conf configuration file.

root@replica:~ # grep conf\\.d /etc/newsyslog.conf
<include> /etc/newsyslog.conf.d/*
<include> /usr/local/etc/newsyslog.conf.d/*

root@replica:~ # mkdir -p /usr/local/etc/newsyslog.conf.d

root@replica:~ # cat > /usr/local/etc/newsyslog.conf.d/bareos << __EOF
# BAREOS
/var/log/php-fpm.log             www:www       640  7     100    @T00  J
/var/log/nginx/access.log        www:www       640  7     100    @T00  J
/var/log/nginx/error.log         www:www       640  7     100    @T00  J
/var/log/bareos/bareos.log       bareos:bareos 640  7     100    @T00  J
/var/log/bareos/bareos-audit.log bareos:bareos 640  7     100    @T00  J
__EOF

Let’s verify that newsyslog(8) understands our configuration.

root@replica:~ # newsyslog -v | tail -5
/var/log/php-fpm.log : --> will trim at Tue May  1 00:00:00 2018
/var/log/nginx/access.log : --> will trim at Tue May  1 00:00:00 2018
/var/log/nginx/error.log : --> will trim at Tue May  1 00:00:00 2018
/var/log/bareos/bareos.log : --> will trim at Tue May  1 00:00:00 2018
/var/log/bareos/bareos-audit.log : --> will trim at Tue May  1 00:00:00 2018

Skel

We now need to create the so-called Bareos skel files, so that each daemon started by the rc(8) scripts can gather all of its configuration from one file.

If we do not do that, the Bareos services will not start and we will see an error like the one below.

root@replica:~ # /usr/local/etc/rc.d/bareos-sd onestart
Starting bareos_sd.
27-Apr 02:59 bareos-sd JobId 0: Error: parse_conf.c:580 Failed to read config file "/usr/local/etc/bareos/bareos-sd.conf"
bareos-sd ERROR TERMINATION
parse_conf.c:148 Failed to find config filename.
/usr/local/etc/rc.d/bareos-sd: WARNING: failed to start bareos_sd

Let’s create them then …

root@replica:~ # cat > /usr/local/etc/bareos/bareos-dir.conf << __EOF
@/usr/local/etc/bareos/bareos-dir.d/*/*
__EOF

root@replica:~ # cat > /usr/local/etc/bareos/bareos-fd.conf << __EOF
@/usr/local/etc/bareos/bareos-fd.d/*/*
__EOF

root@replica:~ # cat > /usr/local/etc/bareos/bareos-sd.conf << __EOF
@/usr/local/etc/bareos/bareos-sd.d/*/*
__EOF

root@replica:~ # cat > /usr/local/etc/bareos/bconsole.conf << __EOF
@/usr/local/etc/bareos/bconsole.d/*
__EOF

… and verify their contents.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.conf
@/usr/local/etc/bareos/bareos-dir.d/*/*

root@replica:~ # cat /usr/local/etc/bareos/bareos-fd.conf
@/usr/local/etc/bareos/bareos-fd.d/*/*

root@replica:~ # cat /usr/local/etc/bareos/bareos-sd.conf
@/usr/local/etc/bareos/bareos-sd.d/*/*

root@replica:~ # cat /usr/local/etc/bareos/bconsole.conf
@/usr/local/etc/bareos/bconsole.d/*

After all our modifications and added files, let’s make sure that the /usr/local/etc/bareos dir permissions are properly set.

root@replica:~ # chown -R bareos:bareos /usr/local/etc/bareos
root@replica:~ # find /usr/local/etc/bareos -type f -exec chmod 640 {} ';'
root@replica:~ # find /usr/local/etc/bareos -type d -exec chmod 750 {} ';'

It’s Alive!

Back to our system settings, we will add the services’ startup entries to the main FreeBSD /etc/rc.conf file.

After the modifications our final /etc/rc.conf file will look as follows.

root@replica:~ # cat /etc/rc.conf
# NETWORK
  hostname=replica.backup.org
  ifconfig_em0="inet 10.0.10.30/24 up"
  defaultrouter="10.0.10.1"

# DAEMONS
  zfs_enable=YES
  sshd_enable=YES
  nfs_client_enable=YES
  syslogd_flags="-ss"
  sendmail_enable=NONE

# OTHER
  clear_tmp_enable=YES
  dumpdev=NO

# BAREOS
  postgresql_enable=YES
  postgresql_class=pgsql
  bareos_dir_enable=YES
  bareos_sd_enable=YES
  bareos_fd_enable=YES
  php_fpm_enable=YES
  nginx_enable=YES

As the PostgreSQL server is already running …

root@replica:~ # /usr/local/etc/rc.d/postgresql status
pg_ctl: server is running (PID: 15205)
/usr/local/bin/postgres "-D" "/usr/local/pgsql/data"

… we will now start the rest of our Bareos stack services.

First the PHP php-fpm daemon.

root@replica:~ # /usr/local/etc/rc.d/php-fpm start
Performing sanity check on php-fpm configuration:
[27-Apr-2018 02:57:09] NOTICE: configuration file /usr/local/etc/php-fpm.conf test is successful

Starting php_fpm.

The Nginx webserver.

root@replica:~ # /usr/local/etc/rc.d/nginx start
Performing sanity check on nginx configuration:
nginx: the configuration file /usr/local/etc/nginx/nginx.conf syntax is ok
nginx: configuration file /usr/local/etc/nginx/nginx.conf test is successful
Starting nginx.

Bareos Storage Daemon.

root@replica:~ # /usr/local/etc/rc.d/bareos-sd start
Starting bareos_sd.

Bareos File Daemon also known as Bareos client.

root@replica:~ # /usr/local/etc/rc.d/bareos-fd start
Starting bareos_fd.

… and last but not least, the most important daemon of this guide, the Bareos Director.

root@replica:~ # /usr/local/etc/rc.d/bareos-dir start
Starting bareos_dir.

We may now check on which ports our daemons are listening.

root@replica:~ # sockstat -l4
USER     COMMAND    PID   FD PROTO  LOCAL ADDRESS         FOREIGN ADDRESS      
bareos   bareos-dir 89823 4  tcp4   *:9101                *:*
root     bareos-fd  73066 3  tcp4   *:9102                *:*
www      nginx      33857 6  tcp4   *:9100                *:*
www      nginx      28675 6  tcp4   *:9100                *:*
www      nginx      20960 6  tcp4   *:9100                *:*
www      nginx      15881 6  tcp4   *:9100                *:*
root     nginx      14388 6  tcp4   *:9100                *:*
www      php-fpm    84047 0  tcp4   127.0.0.1:9000        *:*
www      php-fpm    82285 0  tcp4   127.0.0.1:9000        *:*
www      php-fpm    80688 0  tcp4   127.0.0.1:9000        *:*
www      php-fpm    74735 0  tcp4   127.0.0.1:9000        *:*
root     php-fpm    70518 8  tcp4   127.0.0.1:9000        *:*
bareos   bareos-sd  5151  3  tcp4   *:9103                *:*
pgsql    postgres   20009 4  tcp4   127.0.0.1:5432        *:*
root     sshd       49253 4  tcp4   *:22                  *:*
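
As a quick sanity check that the Director really accepts connections, we can ask it for its status straight from bconsole; the output is omitted here.

root@replica:~ # echo "status dir" | bconsole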

In case you wondered in what order these services start, below is the answer from the rc(8) subsystem.

root@replica:~ # rcorder /etc/rc.d/* /usr/local/etc/rc.d/* | grep -E '(bareos|php-fpm|nginx|postgresql)'
/usr/local/etc/rc.d/postgresql
/usr/local/etc/rc.d/php-fpm
/usr/local/etc/rc.d/nginx
/usr/local/etc/rc.d/bareos-sd
/usr/local/etc/rc.d/bareos-fd
/usr/local/etc/rc.d/bareos-dir

We can now access http://replica.backup.org:9100 in our browser.

bareos-webui-01

It is indeed alive; we can now log in with the admin user and the ADMIN-PASSWORD password.

bareos-webui-02-dashboard

Once we log in we see an empty Bareos dashboard.

Jobs

Now, to make life easier I have prepared two scripts for adding clients to the Bareos server.

The BRONZE-job.sh and BRONZE-sched.sh scripts generate Bareos files for new jobs and schedules. We will put them into the /root/bin dir for convenience.

root@replica:~ # mkdir /root/bin

Both scripts are available below:

After downloading them please rename them accordingly (WordPress limitation).

root@replica:~ # mv BRONZE-sched.sh.key BRONZE-sched.sh
root@replica:~ # mv BRONZE-job.sh.key   BRONZE-job.sh

Let’s make them executable.

root@replica:~ # chmod +x /root/bin/BRONZE-sched.sh
root@replica:~ # chmod +x /root/bin/BRONZE-job.sh

Below is the ‘help’ message for each of them.

root@replica:~ # /root/bin/BRONZE-sched.sh 
usage: BRONZE-sched.sh GROUP TIME

example:
  BRONZE-sched.sh 01 21:00
root@replica:~ # /root/bin/BRONZE-job.sh
usage: BRONZE-job.sh GROUP TIME CLIENT TYPE

  GROUP option: 01 | 02 | 03
   TIME option: 00:00 - 23:59
 CLIENT option: FQDN
   TYPE option: UNIX | Windows

example:
  BRONZE-job.sh 01 21:00 CLIENT.domain.com UNIX

Client

For the first client we will use the replica.backup.org client – the server itself.

First use the BRONZE-sched.sh script to create a new schedule configuration. The script will echo the names of the files it created.

root@replica:~ # /root/bin/BRONZE-sched.sh 01 21:00
/usr/local/etc/bareos/bareos-dir.d/schedule/BRONZE-DAILY-01-2100-SCHED.conf
/usr/local/etc/bareos/bareos-dir.d/jobdefs/BRONZE-DAILY-01-2100-UNIX.conf
/usr/local/etc/bareos/bareos-dir.d/jobdefs/BRONZE-DAILY-01-2100-Windows.conf
/usr/local/etc/bareos/bareos-dir.d/schedule/BRONZE-MONTHLY-01-2100-SCHED.conf
/usr/local/etc/bareos/bareos-dir.d/jobdefs/BRONZE-MONTHLY-01-2100-UNIX.conf
/usr/local/etc/bareos/bareos-dir.d/jobdefs/BRONZE-MONTHLY-01-2100-Windows.conf

We will not use Windows backups for that client in that schedule so we can remove them.

root@replica:~ # rm -f \
  /usr/local/etc/bareos/bareos-dir.d/jobdefs/BRONZE-DAILY-01-2100-Windows.conf \
  /usr/local/etc/bareos/bareos-dir.d/jobdefs/BRONZE-MONTHLY-01-2100-Windows.conf

Then use the BRONZE-job.sh script to add the client and its type to the schedule created earlier. The names of the created files will also be echoed to stdout.

root@replica:~ # /root/bin/BRONZE-job.sh 01 21:00 replica.backup.org UNIX
INFO: client DNS check.
INFO: DNS 'A' RECORD: Host replica.backup.org not found: 3(NXDOMAIN)
INFO: DNS 'PTR' RECORD: Host 3\(NXDOMAIN\) not found: 3(NXDOMAIN)
/usr/local/etc/bareos/bareos-dir.d/job/BRONZE-DAILY-01-2100-replica.backup.org.conf
/usr/local/etc/bareos/bareos-dir.d/job/BRONZE-MONTHLY-01-2100-replica.backup.org.conf

Now we need to reload the Bareos server configuration.

root@replica:~ # echo reload | bconsole
Connecting to Director localhost:9101
1000 OK: replica.backup.org Version: 16.2.7 (09 October 2017)
Enter a period to cancel a command.
reload
reloaded
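
To confirm that the freshly generated Job resources are known to the Director after the reload, we can list them; this is just a quick check and the exact output depends on your jobs.

root@replica:~ # echo "show jobs" | bconsole | grep -i BRONZE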

Let’s see how it looks in the browser. We will run that Job, then cancel it, and then run it again.

bareos-webui-03-clients

Client replica.backup.org is configured.

Let’s go to the Jobs tab to start its backup Job.

bareos-webui-04-jobs

Message that backup Job has started.

bareos-webui-05

We can see it in running state on Jobs tab.

bareos-webui-06

… and on the Dashboard.

bareos-webui-07

We can also display its messages by clicking on its number.

bareos-webui-08

The Jobs tab after cancelling the first Job and starting it again till completion.

bareos-webui-09

… and the Dashboard after these activities.

bareos-webui-10-dashboard

Restore

Let’s restore some data; in Bareos it’s a breeze as it is done directly in the browser on the Restore tab.

bareos-webui-11-restore

The Restore Job has started.

bareos-webui-12

The Dashboard after restoration.

bareos-webui-13-dashboard

… and Volumes with our precious data.

bareos-webui-14-volumes

Contents of a Volume.

bareos-webui-15-volumes-backups

Status of our Bareos Director.

bareos-webui-16

… and the Director Messages, an equivalent of the query actlog command from IBM TSM, or as they call it recently, IBM Spectrum Protect.

bareos-webui-17-messages

… and Bareos Console (bconsole) directly in the browser. Masterpiece!

bareos-webui-18-console

Confirmation about the restored file.

root@replica:~ # ls -l /tmp/bareos-restores/COPYRIGHT 
-r--r--r--  1 root  wheel  6199 Jul 21  2017 /tmp/bareos-restores/COPYRIGHT

root@replica:~ # sha256 /tmp/bareos-restores/COPYRIGHT /COPYRIGHT | column -t
SHA256  (/tmp/bareos-restores/COPYRIGHT)  =  79b7aaafa1bc42a1ff03f1f78a667edb9a203dbcadec06aabc875e25a83d23f0
SHA256  (/COPYRIGHT)                      =  79b7aaafa1bc42a1ff03f1f78a667edb9a203dbcadec06aabc875e25a83d23f0

Remote Replica

We have volumes with backups in the /bareos directory; we will now configure rsync(1) to replicate these backups to the /bareos-dr directory, on the NFS server in the other location.

root@replica:~ # pkg install rsync

The rsync(1) command will look like this.

/usr/local/bin/rsync -r -u -l -p -t -S --force --no-whole-file --numeric-ids --delete-after /bareos/ /bareos-dr/
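
For reference, here is the same command with each option described in a comment, so it is clear what the replication actually does.

# -r              recurse into directories
# -u              skip files that are newer on the receiver
# -l              copy symlinks as symlinks
# -p -t           preserve permissions and modification times
# -S              handle sparse files efficiently
# --force         delete directories even if they are not empty
# --no-whole-file use the delta-transfer algorithm even for local paths
# --numeric-ids   keep numeric uid/gid values instead of mapping them by name
# --delete-after  delete extraneous files from /bareos-dr/ after the transfer
/usr/local/bin/rsync -r -u -l -p -t -S --force --no-whole-file --numeric-ids --delete-after /bareos/ /bareos-dr/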

We will put that command into root’s crontab(1).

root@replica:~ # crontab -e

root@replica:~ # crontab -l
0 7 * * * /usr/local/bin/rsync -r -u -l -p -t -S --force --no-whole-file --numeric-ids --delete-after /bareos/ /bareos-dr/

As all backups finish before 7:00, the end of the backup window, we start the replication at that time.

Summary

So we have a configured Bareos Backup Server on the FreeBSD operating system, ready to make backups and restores. It can be used as an appliance on any virtualization platform or on a physical server with local storage resources instead of NFS shares.

UPDATE 1 – Die Hard Tribute in 9.2-RC3 Loader

The FreeBSD Developers even made a tribute to the Die Hard movie and actually implemented the Nakatomi Socrates screen in the FreeBSD 9.2-RC3 loader, as shown in the images below. Unfortunately it was removed in the later FreeBSD 9.2-RC4 and the official FreeBSD 9.2-RELEASE versions.

freebsd-9.2-nakatomi-socrates-01

freebsd-9.2-nakatomi-socrates-02

UPDATE 2

The Bareos Backup Server on FreeBSD article was featured in the BSD Now 254 – Bare the OS episode.

Thanks for mentioning!

UPDATE 3 – Additional Permissions

Thanks to the user Math who identified the problem, I have added the paragraph below in the proper place to make the HOWTO complete. Without it many Bareos daemons would not start due to a permissions error.

Here is the added paragraph.

We also need to change permissions for the /var/run and /var/db directories for Bareos.

root@replica:~ # chown -R bareos:bareos /var/db/bareos
root@replica:~ # chown -R bareos:bareos /var/run/bareos


EOF