
Silent Fanless Dell Wyse 3030 LT FreeBSD Server

Why use a quite outdated (released in 2016) Dell Wyse 3030 LT terminal under FreeBSD? There are probably many answers to that question. I recently started to use these for my backup purposes … and to be honest I am really pleased with them.

dell-wyse-3030

In the past I used to make my own boxes based on Mini ITX motherboards such as these:

… and to be honest – I was pretty pleased with their service. The reason I started to look for an alternative and settled on Dell Wyse 3030 LT terminals were the Pico ITX power supplies. Both of them failed at some point after 2018. I got replacements from China but then these boxes (without any active cooling) started to overheat. I also tried to replace the motherboards with other ones – but still had the same problem. I really wanted to avoid active cooling because it creates a lot of pointless dust – both inside and outside the case.

My current point of view is that these used Dell Wyse 3030 LT terminals are both cheaper to buy and cheaper to run, as they draw a lot less power than the Mini ITX motherboard based ones.

Dell Wyse 3030 LT

The Dell Wyse 3030 LT terminals are not that powerful – they should be treated more like Raspberry Pi boards than like a casual mini PC – both because of their power draw and compute power. Here is how such a Dell Wyse 3030 LT terminal looks. I put a 2.5″ USB attached hard disk on top of it – it is a WD Elements with 5 TB capacity. The USB pendrive that is plugged into one of the front USB ports is a Lexar S47 32 GB USB drive. This is where FreeBSD 13.2-RC6 is installed. I will of course update it when 13.2-RELEASE is ready to rumble.

dell-3030-wd

You may ask why WD Elements and it is a good question. The answer is: I do not care. I just sort the drives in 'best price' order and buy the cheapest one. The other Dell Wyse 3030 LT terminal I got has a Seagate Basic Portable 2.5″ USB drive instead – also 5 TB in capacity. Works just as well. Costs almost the same. I do not remember which one was the better deal but they both were very close.

dell-3030-seagate

Ports

The Dell Wyse 3030 LT terminal has quite a few ports – most of which I find useful.

dell-wyse-3030-lt-ports

These are:

  • (1) — 1 x USB 3.0 (front)
  • (2) — 2 x USB 2.0 (front)
  • (3) — 1 x Mini Jack (front)
  • (4) — 1 x LAN 10/100/1000 Realtek RTL8111/8168/8411 (back)
  • (5) — 2 x DisplayPort (back)
  • (6) — 1 x USB 2.0 (back)
  • (*) — 1 x WLAN Intel AC-7265 (internal)

The hardware specs are not that great by today's standards.

MODEL: Dell Wyse 3030 LT
  CPU: Intel Celeron N2807 2 x 1.58 GHz (2.16 GHz Turbo)
  RAM: 2 GB DDR3L (soldered on motherboard)
 eMMC: 4 GB Flash (soldered on motherboard)
 mPCI: Intel Dual Band Wireless-AC 7265

I use the ZFS filesystem for my data and one may say that 2 GB RAM is not enough for ZFS needs … but the OpenZFS page recommends 2 GB RAM as a minimum so I do not have to worry here.

Have enough memory: A minimum of 2GB of memory is recommended for ZFS. Additional memory is strongly recommended when the compression and deduplication features are enabled.

The source of that information is the OpenZFS FAQ page.

In the past I have successfully run a 512 MB box with a 2 x 2 TB ZFS mirror for several years. I have had zero crashes/panics/problems with it. Everything just worked like it is supposed to. I also ran several simple services on the same 512 MB box – like NFS and SAMBA services to export the available data to machines on the local network. The only thing I did back then was to limit the ARC usage to 128 MB of RAM. Nothing more.

With the current 2 GB of RAM I have configured the ARC to be 128-256 MB with the following settings in the /etc/sysctl.conf file.

# ZFS ARC TUNING
  vfs.zfs.arc.min=134217728
  vfs.zfs.arc.max=268435456
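
Both values are also runtime tunables – so here is a quick sketch of applying them live with sysctl(8), without waiting for a reboot (this works on FreeBSD 13.x with OpenZFS – treat it as an assumption for older releases):

wyse3030 # sysctl vfs.zfs.arc.min=134217728
wyse3030 # sysctl vfs.zfs.arc.max=268435456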

The details about that Intel Celeron N2807 CPU can be found in the Intel ARK database.

Here are its specs. The most interesting one is the TDP (W) value.

N2807

The lscpu(1) command shows the following information about it.

wyse3030 % lscpu
Architecture:            amd64
Byte Order:              Little Endian
Total CPU(s):            2
Thread(s) per core:      1
Core(s) per socket:      2
Socket(s):               1
Vendor:                  GenuineIntel
CPU family:              6
Model:                   55
Model name:              Intel(R) Celeron(R) CPU  N2807  @ 1.58GHz
Stepping:                8
L1d cache:               24K
L1i cache:               32K
L2 cache:                1024K
Flags:                   fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 cflsh ds acpi mmx fxsr sse sse2 ss htt tm pbe sse3 pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 movbe popcnt tsc_deadline rdrnd tsc_adjust smep erms fpcsds syscall nx rdtscp lm lahf_lm

The FreeBSD information about it from /var/run/dmesg.boot looks as follows.

CPU: Intel(R) Celeron(R) CPU  N2807  @ 1.58GHz (1583.40-MHz K8-class CPU)
  Origin="GenuineIntel"  Id=0x30678  Family=0x6  Model=0x37  Stepping=8
  Features=0xbfebfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,DTS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE>
  Features2=0x41d8e3bf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,SSE4.1,SSE4.2,MOVBE,POPCNT,TSCDLT,RDRAND>
  AMD Features=0x28100800<SYSCALL,NX,RDTSCP,LM>
  AMD Features2=0x101<LAHF,Prefetch>
  Structured Extended Features=0x2282<TSCADJ,SMEP,ERMS,NFPUSG>
  VT-x: PAT,HLT,MTF,PAUSE,EPT,UG,VPID
  TSC: P-state invariant, performance statistics

Several C-states and P-states are properly supported on FreeBSD as shown below.

wyse3030 % sysctl dev.cpu.0
dev.cpu.0.temperature: 44.0C
dev.cpu.0.coretemp.throttle_log: 0
dev.cpu.0.coretemp.tjmax: 100.0C
dev.cpu.0.coretemp.resolution: 1
dev.cpu.0.coretemp.delta: 56
dev.cpu.0.cx_method: C1/hlt C2/io
dev.cpu.0.cx_usage_counters: 21488971 0
dev.cpu.0.cx_usage: 100.00% 0.00% last 409us
dev.cpu.0.cx_lowest: C1
dev.cpu.0.cx_supported: C1/1/1 C2/3/104
dev.cpu.0.freq_levels: 2501/45000 2500/45000 2000/33977 1800/29872 1600/25926 1400/22148 1200/18525 1000/15060 800/11752
dev.cpu.0.freq: 1400
dev.cpu.0.%parent: acpi0
dev.cpu.0.%pnpinfo: _HID=none _UID=0 _CID=none
dev.cpu.0.%location: handle=\_PR_.CPU0
dev.cpu.0.%driver: cpu
dev.cpu.0.%desc: ACPI CPU

Keep in mind to load the coretemp(4) module at system startup to get all the needed temperature information.
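
One way to do that is sysrc(8) with its += append syntax – this adds coretemp to the kld_list entry in /etc/rc.conf (the same kld_list you will see in the full config later in this article):

wyse3030 # sysrc kld_list+="coretemp"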

The output from sensors.sh is available below.

wyse3030 # sensors.sh

            BATTERY/AC/TIME/FAN/SPEED
------------------------------------
               dev.cpu.0.cx_supported: C1/1/1 C2/2/500 C3/3/1500
                   dev.cpu.0.cx_usage: 26.04% 12.27% 61.68% last 125us
                       dev.cpu.0.freq: 498
                hw.acpi.cpu.cx_lowest: C8
                            powerd(8): running

                  SYSTEM/TEMPERATURES
------------------------------------
                dev.cpu.0.temperature: 37.0C (max: 105.0C)
                dev.cpu.1.temperature: 40.0C (max: 105.0C)
      hw.acpi.thermal.tz0.temperature: 26.9C (max: 90.1C)

                   DISKS/TEMPERATURES
------------------------------------
        smart.da1.temperature_celsius: 37.0C

The biggest drawback is the lack of aesni(4) acceleration support. Tests I have made – with powerd(8) enabled – show that it is able to reach 15-16 MB/s while calculating that encryption on these two cores. It is more than enough for my needs as I rarely go above 10-11 MB/s on WiFi transfers.
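
For the curious – a rough sketch of how such a speed test may look – writing random data to the GELI backed pool and watching the reported throughput (the FILE name and count are just examples):

wyse3030 # dd if=/dev/random of=/data/FILE bs=1m count=2048 status=progress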

If you are interested in how the internals of the Dell Wyse 3030 LT motherboard look – take a journey to David Parkinson's page at its Dell Wyse 3030 LT location. You will also find there a lot of information about other Thin Clients – often also called Terminals.

I did not bother to unscrew mine knowing what David Parkinson shows on his site.

WiFi

As the Dell Wyse 3030 LT has the Intel Dual Band Wireless-AC 7265 WiFi card from 2014 – an almost decade old WiFi card – there is a big chance that FreeBSD will support it flawlessly and at least allow 802.11n protocol and speeds – as it is 9 years old.

Here are some specs from Intel on that WiFi card.

AC7265

… and here is the neat part 😀 – it does not work on FreeBSD – it is not able to connect to a WiFi access point – what were you thinking 🙂

This is what I tried on both the 2.4 GHz and 5.0 GHz WiFi networks that are WPA2 protected.

wyse3030 # grep iwm /var/run/dmesg.boot 
iwm0: <Intel(R) Dual Band Wireless AC 7265> mem 0x88500000-0x88501fff at device 0.0 on pci2
iwm0: hw rev 0x210, fw ver 22.361476.0, address d0:57:7b:13:41:4c

wyse3030 # pciconf -lv iwm0
iwm0@pci0:2:0:0: class=0x028000 rev=0x59 hdr=0x00 vendor=0x8086 device=0x095a subvendor=0x8086 subdevice=0x5410
vendor = 'Intel Corporation'
device = 'Wireless 7265'
class = network

wyse3030 # sysctl -n net.wlan.devices
iwm0
wyse3030 # ifconfig wlan0 create wlandev $( !! )
wyse3030 # ifconfig wlan0 up
wyse3030 # ifconfig wlan0 scan
SSID/MESH ID  BSSID              CHAN RATE    S:N     INT CAPS
wifinet       d8:07:b6:aa:bb:00     7 54M   -63:-96  100 EP   APCHANREP WPA RSN BSSLOAD HTCAP VHTCAP VHTOPMODE
wifinet       d8:07:b6:aa:bb:02    40 54M   -80:-96  100 EP   VHTPWRENV APCHANREP WPA RSN HTCAP VHTCAP

These are my WiFi networks that I will try to use:

  • WiFi 2.4GHz – wifinet – d8:07:b6:aa:bb:00
  • WiFi 5.0GHz – wifinet – d8:07:b6:aa:bb:02

Results below.

WiFi 2.4GHz

wyse3030 # wpa_supplicant -i wlan0 -c /etc/wpa_supplicant.conf
Successfully initialized wpa_supplicant
ioctl[SIOCS80211, op=20, val=0, arg_len=7]: Invalid argument
ioctl[SIOCS80211, op=20, val=0, arg_len=7]: Invalid argument
ioctl[SIOCS80211, op=103, val=0, arg_len=128]: Operation now in progress
wlan0: CTRL-EVENT-SCAN-FAILED ret=-1 retry=1
wlan0: Trying to associate with d8:07:b6:aa:bb:00 (SSID='wifinet' freq=2442 MHz)
wlan0: Authentication with d8:07:b6:aa:bb:00 timed out.
wlan0: CTRL-EVENT-DISCONNECTED bssid=d8:07:b6:aa:bb:00 reason=3 locally_generated=1
BSSID d8:07:b6:aa:bb:00 ignore list count incremented to 2, ignoring for 10 seconds
wlan0: CTRL-EVENT-DSCP-POLICY clear_all
wlan0: Trying to associate with d8:07:b6:aa:bb:00 (SSID='wifinet' freq=2442 MHz)
wlan0: Authentication with d8:07:b6:aa:bb:00 timed out.
wlan0: CTRL-EVENT-DISCONNECTED bssid=d8:07:b6:aa:bb:00 reason=3 locally_generated=1
BSSID d8:07:b6:aa:bb:00 ignore list count incremented to 2, ignoring for 10 seconds
wlan0: CTRL-EVENT-DSCP-POLICY clear_all
wlan0: Trying to associate with d8:07:b6:aa:bb:00 (SSID='wifinet' freq=2442 MHz)
wlan0: Authentication with d8:07:b6:aa:bb:00 timed out.
wlan0: CTRL-EVENT-DISCONNECTED bssid=d8:07:b6:aa:bb:00 reason=3 locally_generated=1
BSSID d8:07:b6:aa:bb:00 ignore list count incremented to 2, ignoring for 10 seconds
wlan0: CTRL-EVENT-DSCP-POLICY clear_all
wlan0: Trying to associate with d8:07:b6:aa:bb:00 (SSID='wifinet' freq=2442 MHz)
^C

WiFi 5.0GHz

wyse3030 # wpa_supplicant -i wlan0 -c /etc/wpa_supplicant.conf
Successfully initialized wpa_supplicant
ioctl[SIOCS80211, op=20, val=0, arg_len=7]: Invalid argument
ioctl[SIOCS80211, op=20, val=0, arg_len=7]: Invalid argument
wlan0: Trying to associate with d8:07:b6:aa:bb:02 (SSID='wifinet' freq=5200 MHz)
wlan0: Authentication with d8:07:b6:aa:bb:02 timed out.
wlan0: CTRL-EVENT-DISCONNECTED bssid=d8:07:b6:aa:bb:02 reason=3 locally_generated=1
BSSID d8:07:b6:aa:bb:02 ignore list count incremented to 2, ignoring for 10 seconds
wlan0: CTRL-EVENT-DSCP-POLICY clear_all
wlan0: Trying to associate with d8:07:b6:aa:bb:02 (SSID='wifinet' freq=5200 MHz)
wlan0: Authentication with d8:07:b6:aa:bb:02 timed out.
wlan0: CTRL-EVENT-DISCONNECTED bssid=d8:07:b6:aa:bb:02 reason=3 locally_generated=1
BSSID d8:07:b6:aa:bb:02 ignore list count incremented to 2, ignoring for 10 seconds
wlan0: CTRL-EVENT-DSCP-POLICY clear_all
wlan0: Trying to associate with d8:07:b6:aa:bb:02 (SSID='wifinet' freq=5200 MHz)
wlan0: Authentication with d8:07:b6:aa:bb:02 timed out.
wlan0: CTRL-EVENT-DISCONNECTED bssid=d8:07:b6:aa:bb:02 reason=3 locally_generated=1
BSSID d8:07:b6:aa:bb:02 ignore list count incremented to 2, ignoring for 10 seconds
wlan0: CTRL-EVENT-DSCP-POLICY clear_all
wlan0: Trying to associate with d8:07:b6:aa:bb:02 (SSID='wifinet' freq=5200 MHz)
wlan0: Authentication with d8:07:b6:aa:bb:02 timed out.
wlan0: CTRL-EVENT-DISCONNECTED bssid=d8:07:b6:aa:bb:02 reason=3 locally_generated=1
BSSID d8:07:b6:aa:bb:02 ignore list count incremented to 2, ignoring for 10 seconds
wlan0: CTRL-EVENT-SSID-TEMP-DISABLED id=0 ssid="wifinet" auth_failures=1 duration=10 reason=CONN_FAILED
wlan0: CTRL-EVENT-DSCP-POLICY clear_all
wlan0: CTRL-EVENT-SSID-REENABLED id=0 ssid="wifinet"
wlan0: Trying to associate with d8:07:b6:aa:bb:02 (SSID='wifinet' freq=5200 MHz)
wlan0: CTRL-EVENT-DISCONNECTED bssid=d8:07:b6:aa:bb:02 reason=3 locally_generated=1
wlan0: CTRL-EVENT-DSCP-POLICY clear_all
wlan0: CTRL-EVENT-DSCP-POLICY clear_all
wlan0: CTRL-EVENT-TERMINATING 
^C

… and to be fair – I have used the latest and greatest FreeBSD 13.2-RC6 (the 13.2-RELEASE is probably available now). I even installed the comms/iwmbt-firmware package hoping it would solve anything – it did not change a thing.

For the record – this is how a properly working WiFi connection looks on FreeBSD.

# ifconfig wlan0 up 
# wpa_supplicant -i wlan0 -c /etc/wpa_supplicant.conf
Successfully initialized wpa_supplicant
ioctl[SIOCS80211, op=20, val=0, arg_len=7]: Invalid argument
ioctl[SIOCS80211, op=20, val=0, arg_len=7]: Invalid argument
wlan0: Trying to associate with d8:07:b6:aa:bb:00 (SSID='wifinet' freq=2442 MHz)
wlan0: Associated with d8:07:b6:aa:bb:00
wlan0: WPA: Key negotiation completed with d8:07:b6:aa:bb:00 [PTK=CCMP GTK=CCMP]
wlan0: CTRL-EVENT-CONNECTED - Connection to d8:07:b6:aa:bb:00 completed [id=43 id_str=]

At this point (after you get the CTRL-EVENT-CONNECTED message) you hit [CTRL]+[Z] and then type bg at the prompt to keep the wpa_supplicant(8) process running in the background – and then run dhclient wlan0 to get the IP address, default gateway and DNS servers.
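
For the record – on a box where the card works – the permanent FreeBSD way is to put entries like these into the /etc/rc.conf file instead of starting everything by hand:

  wlans_iwm0="wlan0"
  ifconfig_wlan0="WPA DHCP"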

The best FreeBSD can get on that box when it comes to the WiFi side is the Realtek RTL8188CUS tiny USB dongle … but while it is 802.11n capable – FreeBSD is only able to support 802.11g speeds on it. Details here – Realtek RTL8188CUS – USB 802.11n WiFi Review – in one of my older articles.

You now probably understand why I have these two WiFi antennas detached 🙂 – but for the record – I have tested the WiFi connection with both antennas attached to that Intel AC 7265 card.

LAN

While WiFi on FreeBSD sucks greatly the builtin LAN – Realtek RTL8111/8168/8411 – works quite well with the re(4) driver … but it needs one special quirk to work 🙂

All network connections need to be restarted at the end of the boot process to work – if you do not do that – you just do not have network connectivity at all. I was not able to track down why it is fucked up like that – so I went with the idea of using the /etc/rc.local script as it is already there and it runs really late in the boot process. Here are my /etc/rc.local script contents that seem to work around that fuckup well.

wyse3030 % cat /etc/rc.local
#! /bin/sh

# RESTART NETWORKING
  (
  service netif   restart 1> /dev/null 2> /dev/null
  service routing restart 1> /dev/null 2> /dev/null
  ) &

To not waste boot time I put both the netif and routing service restarts in the background.

DisplayPort

The DisplayPort sockets just work – when attached to a 1080p monitor they just deliver. I tried only the vt(4) console mode. I did not try the X11 mode – but I suspect that the integrated GPU is old enough to work flawlessly with the i915kms.ko driver from the graphics/drm-fbsd13-kmod package.

When it comes to early booting – set your native screen resolution in the /boot/loader.conf file as specified below.

# CONSOLE RESOLUTION # --------------------------------------------------------
  kern.vt.fb.default.mode=1920x1080    # FORCE FHD
  efi_max_resolution=1920x1080         # FORCE FHD

Also – if you (like me) like to have a small(er) font size in the terminal – this entry in the /boot/loader.conf file will also be helpful.

  screen.font=6x12          # USE SMALLEST FONT NO MATTER THE RESOLUTION

USB

The USB ports work as desired. Both USB 2.0 and USB 3.0 ports work without any issues.

I currently use the Lexar S47 32 GB tiny USB dongle for the FreeBSD system. I have not yet tested the builtin 4 GB eMMC flash storage – I will probably add some UPDATE about it in the future.

Data

Let's get back to the main responsibility of that box – backups. In the past my main NAS system used to have two 5 TB disks in a ZFS mirror – but as I have another offsite copy and another offline copy and I also have my main copy always with me – I now find it pointless to have this single NAS mirrored. Besides – it is more than the 3-2-1 backup rule anyway …

As this box has 4 USB ports one can of course even create a raidz ZFS pool with 3 drives for data and 1 for parity – nothing stops you from doing that, as sketched below. You will get 15 TB of usable storage that way … not sure how long it would take to resilver though. As these are SMR drives you may consider raidz2 or mirror instead. I also have some tips for ZFS on SMR Drives here.
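
A minimal sketch of such a pool creation – assuming the four USB drives show up as da1 to da4 and with backup being just an example pool name:

wyse3030 # zpool create backup raidz da1 da2 da3 da4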

Costs

This is the neat part … and I am not joking here. My earlier setups cost about $110 to $120 just for the case/motherboard/PSU/RAM/etc. You needed to add disk costs on top of that. But not with this one. I got several of these Dell Wyse 3030 LT terminals for less than $18 each … and it was not a one-time offer – they just cost that much now.

The disks (5 TB 2.5″ USB drives) that I use also got cheaper over time and it is relatively easy to get a used one for about $80. It does not matter if it is Seagate or Western Digital or whatever. They are all very similar cheap 5400 rpm SMR drives.

This makes my new backup solution cost about $100 for 5 TB of storage space.

Configuration

This is the main FreeBSD config /etc/rc.conf file.

wyse3030 % cat /etc/rc.conf
# BOOT/SILENCE # -------------------------------------------------------------
  rc_startmsgs=NO
  rc_info=NO

# NETWORK # ------------------------------------------------------------------
  hostname=wyse3030
  ifconfig_re0="inet 10.0.0.2/24"
  defaultrouter="10.0.0.1"
  defaultroute_delay=3
  defaultroute_carrier_delay=3
  gateway_enable=YES

# MODULES/COMMON/BASE # ------------------------------------------------------
  kld_list="${kld_list} fusefs coretemp sem cpuctl ichsmb"

# POWER # --------------------------------------------------------------------
  performance_cx_lowest=Cmax
  economy_cx_lowest=Cmax
  powerd_enable=YES
  powerd_flags="-n adaptive -a hiadaptive -b adaptive -m 100 -M 2000"

# DAEMONS # ------------------------------------------------------------------
  syslogd_flags="-ss"
  sshd_enable=YES
  zfs_enable=YES
  sendmail_enable=NO
  sendmail_submit_enable=YES
  sendmail_outbound_enable=NO
  sendmail_msp_queue_enable=NO

# OTHER # --------------------------------------------------------------------
  keyrate=fast
  virecover_enable=NO
  update_motd=NO
  hostid_enable=NO
  entropy_file=NO
  savecore_enable=NO
  clear_tmp_enable=YES
  dumpdev=AUTO

Options that can be changed at any time live in the /etc/sysctl.conf file below.

wyse3030 % cat /etc/sysctl.conf
# SECURITY # ------------------------------------------------------------------
  security.bsd.see_jail_proc=0
  security.bsd.unprivileged_proc_debug=0

# SECURITY/RANDOM PID # -------------------------------------------------------
  kern.randompid=1

# ANNOYING THINGS # -----------------------------------------------------------
  vfs.usermount=1
  kern.coredump=0
  hw.syscons.bell=0
  kern.vt.enable_bell=0

# ZFS ASHIFT 4k BLOCK SIZE # --------------------------------------------------
  vfs.zfs.min_auto_ashift=12

# ZFS DELETE FUCKUP TRIM # ----------------------------------------------------
  vfs.zfs.vdev.trim_max_active=1

# ZFS ARC 128MB FORCE # -------------------------------------------------------
  vfs.zfs.arc.min=134217728
  vfs.zfs.arc.max=268435456

# JAILS/ALLOW UPGRADES IN JAILS # ---------------------------------------------
  security.jail.chflags_allowed=1

# JAILS/ALLOW RAW SOCKETS # ---------------------------------------------------
  security.jail.allow_raw_sockets=1

# ALLOW idprio(8) USE BY REGULAR USER # ---------------------------------------
  security.bsd.unprivileged_idprio=1

… and last but not least (actually first :> to be read) the /boot/loader.conf file.

wyse3030 % cat /boot/loader.conf
# CONSOLE COMMON # ------------------------------------------------------------
  autoboot_delay=2          # USE '-1' => NO WAIT | USE 'NO' => INFINITE WAIT
  loader_logo=none          # LOGO POSSIBILITIES: fbsdbw beastiebw beastie none
  loader_menu_frame=none    # REMOVE FRAMES FROM FreeBSD BOOT MENU
  screen.font=6x12          # USE SMALLEST FONT NO MATTER THE RESOLUTION

# CONSOLE RESOLUTION # --------------------------------------------------------
  kern.vt.fb.default.mode=1920x1080    # FORCE FHD
  efi_max_resolution=1920x1080         # FORCE FHD

# MODULES # -------------------------------------------------------------------
  geom_eli_load=YES
  cryptodev_load=YES
  zfs_load=YES

# DISABLE /dev/diskid/* AND /dev/gptid/* ENTRIES FOR DISKS # ------------------
  kern.geom.label.disk_ident.enable=0
  kern.geom.label.gptid.enable=0

# COLORS # --------------------------------------------------------------------
  kern.vt.color.0.rgb="#000000"
  kern.vt.color.1.rgb="#dc322f"
  kern.vt.color.2.rgb="#859900"
  kern.vt.color.3.rgb="#b58900"
  kern.vt.color.4.rgb="#268bd2"
  kern.vt.color.5.rgb="#ec0048"
  kern.vt.color.6.rgb="#2aa198"
  kern.vt.color.7.rgb="#94a3a5"
  kern.vt.color.8.rgb="#586e75"
  kern.vt.color.9.rgb="#cb4b16"
  kern.vt.color.10.rgb="#859900"
  kern.vt.color.11.rgb="#b58900"
  kern.vt.color.12.rgb="#268bd2"
  kern.vt.color.13.rgb="#d33682"
  kern.vt.color.14.rgb="#2aa198"
  kern.vt.color.15.rgb="#6c71c4"

# RACCT/RCTL RESOURCE LIMITS # ------------------------------------------------
  kern.racct.enable=1

# ZFS TUNING # ----------------------------------------------------------------
  vfs.zfs.prefetch_disable=1

# POWER MANAGEMENT POWER OFF DEVICES WITHOUT ATTACHED DRIVER # ----------------
  hw.pci.do_power_nodriver=3

# POWER MANAGEMENT FOR EVERY USED AHCI CHANNEL (ahcich 0-7) # -----------------
  hint.ahcich.0.pm_level=5
  hint.ahcich.1.pm_level=5
  hint.ahcich.2.pm_level=5
  hint.ahcich.3.pm_level=5
  hint.ahcich.4.pm_level=5
  hint.ahcich.5.pm_level=5
  hint.ahcich.6.pm_level=5
  hint.ahcich.7.pm_level=5

# DISABLE DESTRUCTIVE DTrace # ------------------------------------------------
  security.bsd.allow_destructive_dtrace=0

There is also this 5 TB disk drive configuration to be mentioned.

wyse3030 # gpart destroy -F da0
wyse3030 # dd if=/dev/zero of=/dev/da0 bs=1m status=progress count=10
wyse3030 # gpart create -s GPT da0
wyse3030 # gpart add -t freebsd-zfs -a 4k da0
wyse3030 # geli init -s 4096 da0p1
wyse3030 # geli attach da0p1

Why do I use password based encryption and not key based? If someone stole that device then – with the key attached on, for example, a USB drive – all the data would be accessible – and all the encryption effort would be lost. With password based encryption whoever steals the device gets some nice /dev/random like noise from my disk because it is encrypted.

In the end these boxes do not hang by themselves and power outages are not that common – it is not like twice a week, it is more like once a quarter – and I do not mind typing my password once a quarter to keep my data safe from people that should not be allowed to access it.
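
For completeness – after such an outage the whole 'cost' of the password based setup is more or less this short sequence:

wyse3030 # geli attach da0p1
Enter passphrase:
wyse3030 # zpool import wd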

ZFS Pool Configuration

I did not use anything spectacular for the ZFS configuration – just disabled atime and enabled compression to save some space. I also increased the recordsize to 1M so the compressratio would be a little better … but nothing more to be honest.

wyse3030 # zpool history wd
History for 'wd':
2023-04-05.14:46:01 zpool create wd /dev/da0p1.eli
2023-04-05.14:46:07 zfs set compression=zstd wd
2023-04-05.14:46:12 zfs set atime=off wd
2023-04-05.14:46:15 zfs set recordsize=1m wd
2023-04-05.14:51:46 zfs create wd/data
2023-04-05.14:51:50 zfs set mountpoint=none wd
2023-04-05.14:52:00 zfs set mountpoint=/data wd/data

As you probably guessed the wd name is for the Western Digital 5 TB drive. If you are curious (and you probably are) the second ZFS pool is called seagate 🙂

Last time I used the LZ4 compression for the ZFS pool. Now I use the ZSTD compression and the compressratio improved from 1.04 to 1.07 – almost twice the saved space.

wyse3030 % zfs get compressratio
NAME                                      PROPERTY       VALUE  SOURCE
wd                                        compressratio  1.07x  -
wd/data                                   compressratio  1.07x  -

One may think that 7% more data space is almost 'non-existent' – think again. When it comes to terabytes it makes a big difference. In my case – when my real data takes 4.2 TB and the compressed version takes 3.9 TB – it is … 245 GB of space. For free. Thank you ZFS.
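
You can verify such savings on your own dataset – the logicalused property shows the uncompressed size while used shows what actually landed on the disk:

wyse3030 % zfs get -o property,value used,logicalused,compressratio wd/data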

Quirks

The 'quirks' are things that need to be done differently than usual to make the device 'happy' – to make it work the way we want.

One of the Dell Wyse 3030 LT quirks is its boot process … it does not need any additional configuration – I just want to mention it – so you will not be stressed about it. After the FreeBSD loader(8) menu there is a slight pause before continuing into the 'rest' of the boot process. It is several seconds long and it is harmless. Here is how it looks on the attached screen.

freebsd-bootloader

Power Usage Comparison

So it is dirt cheap … and takes HALF the power of my previous Mini ITX solution. The Mini ITX based systems used about 10.4 W of power in the idle state.

After measuring the Dell Wyse 3030 LT terminal power consumption with a kill-a-watt physical device – it is only about 4.5 W … and this is with the 5 TB 2.5″ disk attached, in the idle state.

Another interesting fact is that I used the same 'external' power supply that I used for these Mini ITX boxes.

power-supply

It is 2.5 A at 12 V (which means 30 W) – exactly what a Dell Wyse 3030 LT terminal needs – according to its specification. As you can see from the photo above – the official power supply – while quite similar in size – needs a lot more space because of the additional 3-pin power cord – which is quite thick and takes a lot more space.

As in the past – I used python(1) [1] processes to load the CPU and dd(8) commands [2] [3] to load the drives' I/O subsystem.

[1] # echo '999999999999999999 ** 999999999999999999' | python
[2] # dd if=/dev/da0  of=/dev/null  bs=1M
[3] # dd if=/dev/zero of=/data/FILE bs=1M

I will also include older Mini ITX based system measurements.

BOX        POWER  CPU          I/O
Wyse3030   3.8 W  IDLE         IDLE
MiniITX   10.5 W  IDLE         IDLE


BOX        POWER  CPU          I/O
Wyse3030  10.3 W  8 Thread(s)  3 DISK READ Thread(s) + 3 DISK WRITE Thread(s)
MiniITX   17.8 W  8 Thread(s)  3 DISK READ Thread(s) + 3 DISK WRITE Thread(s)

So after cutting the CAPEX costs from about $110 to about $15 I also cut the OPEX in half.

What can I say … I really appreciate and like the price/performance ratio 🙂

Cloud Storage Costs Comparison

I am not a fan of cloud based storage – it is just damn overpriced to say the least.

In the past I have made multiple calculations about cloud storage costs and it will be no different this time. We need to assume some cost of 1 kWh of power. In Poland it is about $0.25 for 1 kWh which means running a device that draws 1000 watts for an entire hour.

Assuming that our Dell Wyse 3030 LT uses about 4.5 W all the time it would generate the following costs.

  COST  TIME 
 $0.03  1 DAY(s)
 $9.96  1 YEAR(s)
$29.88  3 YEAR(s)
$49.80  5 YEAR(s)
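
For the record – this is where the daily cost comes from – 4.5 W running for 24 hours at $0.25 per 1 kWh – computed here with bc(1):

% echo '4.5 * 24 / 1000 * 0.25' | bc -l
.02700000000000000000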

Combining that with the server cost ($100) we get the TCO for our self hosted 5 TB storage service.

COST  TIME
$110  1 YEAR(s)
$130  3 YEAR(s)
$150  5 YEAR(s)

This time, after searching for the cheapest cloud based storage, I found these services.

  • Amazon Drive
  • Amazon S3 Glacier Storage
  • Backblaze B2 Cloud Storage
  • Google One
  • Dropbox Basic

Here are their costs summarized for a 1 year period for 5 TB of data.

PRICE  TIME       SERVICE
 $300  1 YEAR(s)  Amazon Drive
 $310  1 YEAR(s)  Google One
 $240  1 YEAR(s)  Amazon S3 Glacier Storage
 $450  1 YEAR(s)  Backblaze B2 Cloud Storage
 $360  1 YEAR(s)  Dropbox Basic

For the Backblaze B2 Cloud Storage I assumed an average between the upload/download prices because upload is two times cheaper than download.

Here are their costs summarized for a 5 year period for 5 TB of data.

PRICE  TIME       SERVICE
 $150  5 YEAR(s)  Dell Wyse 3030 LT + 5 TB USB 2.5 Drive
 $230  5 YEAR(s)  Dell Wyse 3030 LT + 5 TB USB 2.5 Drive (assuming drive failed)
$1500  5 YEAR(s)  Amazon Drive
$1550  5 YEAR(s)  Google One
$1200  5 YEAR(s)  Amazon S3 Glacier Storage
$2250  5 YEAR(s)  Backblaze B2 Cloud Storage
$1800  5 YEAR(s)  Dropbox Basic

… but there is also a catch here. Recently I was able to get a lifetime offer for 1 TB of cloud storage. It cost me about $170 for that 1 TB.

This is where it gets interesting – for 5 TB we would have to pay $850 for lifetime cloud storage. Let's calculate where the 'personal' NAS becomes comparable to the 'lifetime' cloud storage.


PRICE   TIME       SERVICE
 $230   5 YEAR(s)  Dell Wyse 3030 LT + 5 TB USB 2.5 Drive => assuming drive failed 1 time(s)
 $460  10 YEAR(s)  Dell Wyse 3030 LT + 5 TB USB 2.5 Drive => assuming drive failed 2 time(s)
 $690  15 YEAR(s)  Dell Wyse 3030 LT + 5 TB USB 2.5 Drive => assuming drive failed 3 time(s)
 $920  20 YEAR(s)  Dell Wyse 3030 LT + 5 TB USB 2.5 Drive => assuming drive failed 4 time(s)
 $850  UNLIMITED   {5TB unlimited time cloud vendor}

So … it will take about 20 years of your life (and an assumption that a drive will fail every 5 years with 4 drive failures total) to make it less expensive than the unlimited cloud storage. To be honest – I would get such unlimited cloud storage for the most important files and documents – for example in 1 TB or smaller size. Keep an encrypted backup copy of the most important files and you are doing better than the classic 3-2-1 backup rule … assuming that you already have one online and one offline backup and also one onsite backup and one offsite backup … and at least two different medium types used 🙂

Drawbacks

WiFi is not supported and the network requires a restart after boot. Besides these I did not find any other issues.

ZFS Boot Environments Revolutions

I do not have to remind you that I am a big fan of the ZFS Boot Environments feature. From the time when I first used it on OpenSolaris and Solaris systems I was really fascinated by it. Bulletproof upgrades and changes to the entire system … and it was possible more than a decade ago. Like a dream. Today with the beadm(8) and bectl(8) tools and also the FreeBSD loader(8) the ZFS Boot Environments are first class and one of the main features of the FreeBSD operating system.

Back in the more 'normal' times (before C19) I was able to talk twice about ZFS Boot Environments. I hope I explained them well.

  • 1st in Poland at PBUG meeting – with presentation available HERE.
  • 2nd in Holland at NLUUG conference – with presentation available HERE.

I do not know any downsides of ZFS Boot Environments but if you would put a gun to my head and make me find one – I would say that you still have to reboot(8) to change to another BE. This is about to change …

Reroot Instead of Reboot

What is reroot? It is the ability to switch to another root filesystem without the need for a full system reboot. The loaded and running kernel stays the same of course – but this is the only downside. This feature is implemented in the reboot(8) command with the -r argument.

As we can read in the FreeBSD 10.3-RELEASE Release Notes page:

The initial implementation of “reroot” support has been added to the reboot(8) utility, allowing the root filesystem to be mounted from a temporary source filesystem without requiring a full system reboot. (r293744) (Sponsored by The FreeBSD Foundation)

How can reroot be useful here? It will save you a lot of time when you did not update the kernel. There are two types of update strategies when using ZFS Boot Environments. You can create a new BE (as a backup world that you can get back to) and update the running system. Then you can use checkrestart(1) to verify which processes should be restarted because either their binaries or libraries have been updated.

checkrestart.1
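
A condensed sketch of that first strategy – with backup being just an example BE name:

# beadm create backup
# pkg upgrade
# checkrestart

… and then you restart whatever checkrestart(1) listed.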

The other way is to create a new separate BE (while not touching the running one), then mount it, update that new BE and reboot into it later. This created a need to reboot(8) – but not anymore. Especially when you just update the packages with the pkg(8) command.
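
A minimal sketch of that second strategy – assuming a BE named safe and using the pkg(8) -c (chroot) option to update it while mounted:

# beadm create safe
# beadm mount safe /tmp/safe
# pkg -c /tmp/safe upgrade
# beadm umount safe
# beadm activate safe
# beadm reroot safe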

beadm(8)

With the new reroot option of beadm(8) you can tell FreeBSD to reroot your running kernel into the specified BE. It will definitely have less impact in virtual machines as they reboot quite fast – but imagine the saved time on a server class physical machine with about 10 minutes lost on BIOS POST messages and initialization … or a personal desktop/laptop GELI encrypted system without the need to type in the GELI password again to decrypt it after reboot.

beadm.reroot

On the screenshot above I use the latest FreeBSD 13.1-BETA1 but it works the same on other production FreeBSD releases such as 12.3-RELEASE or 13.0-RELEASE. The new upgraded beadm(8) is available from its home at the GitHub page here:

I will add that updated version to the FreeBSD Ports tree later – along with an updated man page.

Usage

Usage of this new feature is quite simple. You type beadm reroot BENAME in the terminal and FreeBSD reroots into that BE without a reboot. It takes about 9-10 seconds on my 11 year old ThinkPad W520 so it may be even faster on your more up to date system.

# beadm list
BE        Active Mountpoint  Space Created
12.3      -      -            9.5G 2021-10-18 13:14
13.0.p6   -      -           13.9G 2022-01-27 11:07
13.0      -      -           12.9G 2022-03-05 15:02
13.0.safe -      -            2.8M 2022-03-08 14:54
13.1      NR     /            9.5G 2022-03-12 00:18
13.1.safe -      -          544.0M 2022-03-13 23:18

# beadm activate 13.1.safe
Activated successfully

# beadm reroot 13.1.safe

… and you are going a route similar to typing shutdown now on a running system. All services are stopped. Then the root is changed to the new one. Then the system continues to boot, starting all its services as usual. Just without the BIOS POST, the bootloader and the kernel parts.

The reroot feature is especially useful in one of these scenarios:

From what I know bectl(8) does not have that reroot feature but maybe it will be added to it sometime in the future.

Summary

Not sure that ZFS Boot Environments Revolutions is the best title for this blog post, but as I used Reloaded for my 2nd ZFS Boot Environments presentation I thought I would stick to The Matrix (1999) naming scheme. I could of course do a 3rd and updated presentation … but I am afraid that it will not happen … or at least not soon.

I did not think that the FreeBSD Enterprise Storage presentation that I gave at the 2020/02 PBUG would be my last – it was more than 2 years ago.

UPDATE 1 – Faster Upgrade with New beadm(8) Version

Today (2022/05/06) I introduced the new beadm(8) version 1.3.5 that comes with a new chroot(8) feature. It has already been committed to the FreeBSD Ports tree under PR 263805 so expect packages to be available soon.

You can also update beadm(8) directly like that:

# fetch -o /usr/local/sbin/beadm https://raw.githubusercontent.com/vermaden/beadm

Now for the faster update process – here are the instructions depending on the shell you use.

  • ZSH / CSH
# beadm create 13.1-RC6
# beadm chroot 13.1-RC6
BE # zsh || csh
BE # yes | freebsd-update upgrade -r 13.1-RC6
BE # repeat 3 freebsd-update install
BE # exit
# beadm activate 13.1-RC6
# reboot
  • SH / BASH / FISH / KSH
# beadm create 13.1-RC6
# beadm chroot 13.1-RC6
BE # sh || bash || fish || ksh
BE # yes | freebsd-update upgrade -r 13.1-RC6
BE # seq 3 | xargs -I- freebsd-update install
BE # exit
# beadm activate 13.1-RC6
# reboot

Happy upgrading 🙂

EOF

UFS Boot Environments

Yes you read it correctly. The fabulous ZFS Boot Environments – more about them here – https://is.gd/BECTL – if you are not familiar with this concept – are now also possible on UFS filesystems on FreeBSD. Of course in a slightly different form and without using snapshots and clones but the idea and solution remain the same. You can now have bootable backups of your system before major changes and/or upgrades. This solution does not use UFS snapshots. All bootable UFS variants are supported – with and without Soft Updates or Soft Updates Journaling. The idea behind UFS Boot Environments lies in several additional root (/) partitions that are used as alternate boot environments.

If you are interested in ARM more than in X86 then also check the UFS Boot Environments for ARM article.

The concept is similar to the Solaris Live Upgrade mechanism with its lucreate/luupgrade/lustatus commands and also to AIX Alternate Disk Cloning and Install with the alt_disk_copy/alt_disk_install commands.

In this article I will show you how to set up a new FreeBSD system with 3 such partitions. In my honest opinion it is more than enough for most purposes. On my desktop/workstation I have more than 1000 packages installed. With the FreeBSD Base System it takes about 11 GB of space with ZFS compression and 15 GB without it. Thus I propose 16 GB partitions. Your needs may of course be different. You may as well create 4 GB or 64 GB partitions.

The UFS Boot Environments would not exist without the inspiration from the FreeBSD Upgrade Procedure Using GPT blog post by Mariusz Zaborski (also known as oshogbo) who describes the concept of bootme flags for GPT partitions. That is the heart of this solution. When you activate a boot environment the bootme flag is removed from all existing boot environments and set for the newly desired one. The ufsbe(8) tool is currently tested on FreeBSD 12.x and 13.x.
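
Under the hood an activation boils down to gpart(8) attribute operations like these – with the partition indexes being examples:

# gpart unset -a bootme -i 3 ada0
# gpart set -a bootme -i 4 ada0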

FreeBSD Install for UFS Boot Environments

Generally only GPT partitioning is needed to use UFS Boot Environments. Below I will show an example install process with 3 root partitions of 16 GB each.

In the FreeBSD Installer select Install.

Then select the Auto (UFS) option.

Then use the Entire Disk option.

Then select the GPT partition table.

The FreeBSD Installer will propose the following solution.

Change it into 3 partitions of 16 GB each to make it look like the one below and hit Finish.

Then Commit your choice.

… and then the install process will continue as usual.

Besides these options you may select whatever you want in the install process.

After the system reboots the gpart(8) output will look like the one below.

root@fbsd13:~ # gpart show
=>       40  134217648  ada0  GPT  (64G)
         40       1024     1  freebsd-boot  (512K)
       1064    2097152     2  freebsd-swap  (1.0G)
    2098216   33554432     3  freebsd-ufs  (16G)
   35652648   33554432     4  freebsd-ufs  (16G)
   69207080   33554432     5  freebsd-ufs  (16G)
  102761512   31456176        - free -  (15G)

Now fetch(1) the ufsbe.sh script from its GitHub page.

# fetch https://raw.githubusercontent.com/vermaden/ufsbe/main/ufsbe.sh
# chmod +x ./ufsbe.sh
# ./ufsbe.sh

NOPE: did not found boot environment setup with 'ufsbe' label

INFO: setup each boot environment partition with appropriate label

HELP: list all 'freebsd-ufs' partitions type:

  # gpart show -p | grep freebsd-ufs
      2098216   33554432  ada0p3  freebsd-ufs  [bootme]  (16G)
     35652648   33554432  ada0p4  freebsd-ufs  (16G)
     69207080   33554432  ada0p5  freebsd-ufs  (16G)

HELP: to setup partitions 3/4/5 as boot environments type:

  # gpart modify -i 3 -l ufsbe/3 ada0
  # gpart modify -i 4 -l ufsbe/4 ada0
  # gpart modify -i 5 -l ufsbe/5 ada0

It will welcome you with information about needed setup steps.

We will now perform these steps – marking all boot environment partitions with the appropriate ufsbe labels.

# gpart modify -i 3 -l ufsbe/3 ada0
ada0p3 modified
# gpart modify -i 4 -l ufsbe/4 ada0
ada0p4 modified
# gpart modify -i 5 -l ufsbe/5 ada0
ada0p5 modified

Now ufsbe.sh will set the bootme flag on the currently used root (/) partition.

# ./ufsbe.sh
INFO: flag 'bootme' successfully set on / filesystem
usage:
  ufsbe.sh list
  ufsbe.sh activate
  ufsbe.sh sync  

Setup is complete.

All three root partitions now have an ufsbe label. To keep it simple – the /dev/ada0p3 device gets the ufsbe/3 label and the /dev/ada0p4 device gets ufsbe/4 … you see the pattern.

# gpart show -p -l
=>       40  134217648    ada0  GPT  (64G)
         40       1024  ada0p1  (null)  (512K)
       1064    2097152  ada0p2  swap  (1.0G)
    2098216   33554432  ada0p3  ufsbe/3  [bootme]  (16G)
   35652648   33554432  ada0p4  ufsbe/4  (16G)
   69207080   33554432  ada0p5  ufsbe/5  (16G)
  102761512   31456176          - free -  (15G)

You can now use the UFS Boot Environments on this system.

Using UFS Boot Environments

Let's list our boot environments with the list command. The short 'l' option also works.

# ./ufsbe.sh list
PROVIDER LABEL        ACTIVE
ada0p3   ufsbe/3      NR  
ada0p4   ufsbe/4      -   
ada0p5   ufsbe/5      -  

Its output is similar to my ZFS Boot Environments tool beadm(8). The N flag shows that this is the boot environment we are using NOW. The R flag shows which one we will use after the reboot(8).

Currently only boot environment 3 is populated (by the FreeBSD Installer that is). Boot environments 4 and 5 are empty filesystems.

You can either extract your own FreeBSD version there with base.txz and kernel.txz or use the sync option of ufsbe.sh which will use rsync(1) for the process. Below is an example of syncing boot environment 3 (the one we installed into) with the currently empty boot environment 4.

# ./ufsbe.sh sync 3 4
NOPE: rsync(1) is not available in ${PATH}
INFO: install 'net/rsync' package or port

# pkg install net/rsync

# ./ufsbe.sh sync 3 4
INFO: syncing '3' (source) => '4' (target) boot environments ...
INFO: boot environments '3' (source) => '4' (target) synced

You can now see that boot environments 3 and 4 have the same size.

# df -h
Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/ada0p3     15G    1.3G     13G     9%    /
devfs          1.0K    1.0K      0B   100%    /dev
/dev/ada0p4     15G    1.2G     13G     9%    /ufsbe/4
/dev/ada0p5     15G     32M     14G     0%    /ufsbe/5

If we would like to activate the empty boot environment 5 then ufsbe.sh will not let us do that because that would make our system unbootable. Of course it is quite a fast/naive check but at least it makes sure some files exist on the soon to be active boot environment. Currently these files are checked but this list may be extended in the future:

  • /boot/kernel/kernel
  • /boot/loader.conf
  • /etc/rc.conf
  • /rescue/ls
  • /bin/ls
  • /sbin/fsck
  • /usr/bin/su
  • /usr/sbin/chroot
  • /lib/libc.so.*
  • /usr/lib/libpam.so.*

Below is this 'protection' in action.

# ./ufsbe.sh activate 5
NOPE: boot environment '5' is not complete
INFO: critical file '/ufsbe/5/boot/kernel/kernel' is missing
INFO: use 'sync' option or copy file manually

The boot environment 4 activation process works as desired as we populated it with files from boot environment 3 first.

# ./ufsbe.sh activate 4
INFO: boot environment '4' now activated

Same as with beadm(8) – ufsbe.sh also checks if a boot environment is already active.

# ./ufsbe.sh activate 4
INFO: boot environment '4' is already active

The list of our boot environments looks like this now.

# ./ufsbe.sh list
PROVIDER LABEL        ACTIVE
ada0p3   ufsbe/3      N   
ada0p4   ufsbe/4      R   
ada0p5   ufsbe/5      -   

… and this is how the output of gpart(8) looks now.

# gpart show -p -l
=>       40  134217648    ada0  GPT  (64G)
         40       1024  ada0p1  (null)  (512K)
       1064    2097152  ada0p2  swap  (1.0G)
    2098216   33554432  ada0p3  ufsbe/3  (16G)
   35652648   33554432  ada0p4  ufsbe/4  [bootme]  (16G)
   69207080   33554432  ada0p5  ufsbe/5  (16G)
  102761512   31456176          - free -  (15G)

We will now reboot into the activated boot environment 4.

# shutdown -r now

After the reboot(8) we see that we are now booted from boot environment 4.

# ./ufsbe.sh list
PROVIDER LABEL        ACTIVE
ada0p3   ufsbe/3      -   
ada0p4   ufsbe/4      NR  
ada0p5   ufsbe/5      -   

Closing Notes

Keep in mind that this is only the first 0.1 version of ufsbe.sh. Do not use it on production or important systems and make sure you have restorable backups. Like with beadm(8) in the past I plan to improve it with more useful options and also add it to the Ports tree in the future.

Feel free to share your thoughts about this tool.

I must wait till midnight to make it show as posted on the 2nd of April because if I posted it on the 1st of April it would be taken as an April Fools' joke – which it definitely is not.

Enjoy.

Updating or Upgrading

You may use the Upgrade FreeBSD with ZFS Boot Environments method with these UFS Boot Environments as well – but now you will chroot(8) into /ufsbe/4 for example – as sketched below.
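
A sketch of such an upgrade into boot environment 4 – modeled on the beadm(8) instructions from the earlier article – with 13.1-RC6 as the example release:

# ./ufsbe.sh sync 3 4
# chroot /ufsbe/4
BE # yes | freebsd-update upgrade -r 13.1-RC6
BE # seq 3 | xargs -I- freebsd-update install
BE # exit
# ./ufsbe.sh activate 4
# shutdown -r now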

EOF

FreeBSD Enterprise 1 PB Storage

Today the FreeBSD operating system turns 26 years old. The 19th of June is the International FreeBSD Day. This is why I got something special today :). How about using FreeBSD as an Enterprise Storage solution on real hardware? This is where FreeBSD shines with all its storage features – ZFS included.

Today I will show you how I have built a so called Enterprise Storage solution based on the FreeBSD system with more than 1 PB (Petabyte) of raw capacity.

I have built various storage related systems based on FreeBSD:

This project is different. How much storage space can you squeeze from a single 4U system? It turns out a lot! Definitely more than 1 PB (1024 TB) of raw storage space.

Here is the (non clickable) Table of Contents.

  • Hardware
  • Management Interface
  • BIOS/UEFI
  • FreeBSD System
    • Disks Preparation
    • ZFS Pool Configuration
    • ZFS Settings
    • Network Configuration
    • FreeBSD Configuration
  • Purpose
  • Performance
    • Network Performance
    • Disk Subsystem Performance
  • FreeNAS
  • UPDATE 1 – BSD Now 305
  • UPDATE 2 – Real Life Pictures in Data Center

Hardware

There are 4U servers with 90-100 3.5″ drive slots which will allow you to pack 1260-1400 Terabytes of data (with 14 TB drives). Examples of such systems are:

I will be using the first one – the TYAN FA100 for short.

logo-tyan.png

While both the GlusterFS and Minio clusters were done on virtual hardware (or even FreeBSD Jails containers) this one uses real physical hardware.

The build has the following specifications.

 2 x 10-Core Intel Xeon Silver 4114 CPU @ 2.20GHz
 4 x 32 GB RAM DDR4 (128 GB Total)
 2 x Intel SSD DC S3500 240 GB (System)
90 x Toshiba HDD MN07ACA12TE 12 TB (Data)
 2 x Broadcom SAS3008 Controller
 2 x Intel X710 DA-2 10GE Card
 2 x Power Supply

The price of the whole system is about $65 000 – drives included. Here is how it looks.

tyan-fa100-small.jpg

One thing that you will need is a rack cabinet that is 1200 mm deep to fit that monster 🙂

Management Interface

The so called Lights Out management interface is really nice. It is not bloated, well organized and works quite fast. You can create several separate user accounts or connect to external user services like LDAP/AD/RADIUS for example.

n01.png

After logging in a simple Dashboard welcomes us.

n02.png

We have access to various Sensor information with temperatures of the system components.

n03

We have System Inventory information with installed hardware.

n04.png

There is a separate Settings menu for various setup options.

n05.png

I know it is 2019 but an HTML5 only Remote Control (remote console) – without the need for any third party plugins like Java/Silverlight/Flash/… – is very welcome. It works very well too.

n06.png

n07.png

One is of course allowed to power on/off/cycle the box remotely.

n08.png

The Maintenance menu for BIOS updates.

n09.png

BIOS/UEFI

After booting into the BIOS/UEFI setup it is possible to select which drives to boot from. On the screenshots – the two SSD drives prepared for the system.

nas01.png

The BIOS/UEFI interface shows two Enclosures but these are the two Broadcom SAS3008 controllers. Some drives are attached via the first Broadcom SAS3008 controller, the rest are attached via the second one, and they call them Enclosures instead of controllers for some reason.

nas05.png

FreeBSD System

I have chosen the latest FreeBSD 12.0-RELEASE for the purpose of this installation. It is a generally very 'default' installation with a ZFS mirror on two SSD disks. Nothing special.

logo-freebsd.jpg

The installation of course supports the ZFS Boot Environments bulletproof upgrades/changes feature.

# zpool list zroot
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zroot   220G  3.75G   216G        -         -     0%     1%  1.00x  ONLINE  -

# zpool status zroot
  pool: zroot
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            da91p4  ONLINE       0     0     0
            da11p4  ONLINE       0     0     0

errors: No known data errors

# df -g
Filesystem              1G-blocks Used  Avail Capacity  Mounted on
zroot/ROOT/default            211    2    209     1%    /
devfs                           0    0      0   100%    /dev
zroot/tmp                     209    0    209     0%    /tmp
zroot/usr/home                209    0    209     0%    /usr/home
zroot/usr/ports               210    0    209     0%    /usr/ports
zroot/usr/src                 210    0    209     0%    /usr/src
zroot/var/audit               209    0    209     0%    /var/audit
zroot/var/crash               209    0    209     0%    /var/crash
zroot/var/log                 209    0    209     0%    /var/log
zroot/var/mail                209    0    209     0%    /var/mail
zroot/var/tmp                 209    0    209     0%    /var/tmp

# beadm list
BE      Active Mountpoint  Space Created
default NR     /            2.4G 2019-05-24 13:24

Disks Preparation

From all the possible setups with 90 disks of 12 TB capacity I have chosen to go the RAID60 way – its ZFS equivalent of course. With 12 disks in each RAID6 (raidz2) group – and 7 such groups – we will have 84 disks used for the ZFS pool with 6 drives left as SPARE disks – that plays well for me. The disk distribution will look more or less like this.

DISKS  CONTENT
   12  raidz2-0
   12  raidz2-1
   12  raidz2-2
   12  raidz2-3
   12  raidz2-4
   12  raidz2-5
   12  raidz2-6
    6  spares
   90  TOTAL

Here is how the FreeBSD system sees these drives via the camcontrol(8) command – sorted by the attached SAS controller – scbus(4).

# camcontrol devlist | sort -k 6
(AHCI SGPIO Enclosure 1.00 0001)   at scbus2 target 0 lun 0 (pass0,ses0)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 50 lun 0 (pass1,da0)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 52 lun 0 (pass2,da1)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 54 lun 0 (pass3,da2)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 56 lun 0 (pass5,da4)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 57 lun 0 (pass6,da5)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 59 lun 0 (pass7,da6)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 60 lun 0 (pass8,da7)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 66 lun 0 (pass9,da8)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 67 lun 0 (pass10,da9)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 74 lun 0 (pass11,da10)
(ATA INTEL SSDSC2KB24 0100)        at scbus3 target 75 lun 0 (pass12,da11)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 76 lun 0 (pass13,da12)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 82 lun 0 (pass14,da13)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 83 lun 0 (pass15,da14)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 85 lun 0 (pass16,da15)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 87 lun 0 (pass17,da16)
(Tyan B7118 0500)                  at scbus3 target 88 lun 0 (pass18,ses1)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 89 lun 0 (pass19,da17)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 90 lun 0 (pass20,da18)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 91 lun 0 (pass21,da19)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 92 lun 0 (pass22,da20)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 93 lun 0 (pass23,da21)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 94 lun 0 (pass24,da22)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 95 lun 0 (pass25,da23)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 96 lun 0 (pass26,da24)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 97 lun 0 (pass27,da25)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 98 lun 0 (pass28,da26)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 99 lun 0 (pass29,da27)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 100 lun 0 (pass30,da28)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 101 lun 0 (pass31,da29)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 102 lun 0 (pass32,da30)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 103 lun 0 (pass33,da31)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 104 lun 0 (pass34,da32)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 105 lun 0 (pass35,da33)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 106 lun 0 (pass36,da34)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 107 lun 0 (pass37,da35)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 108 lun 0 (pass38,da36)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 109 lun 0 (pass39,da37)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 110 lun 0 (pass40,da38)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 48 lun 0 (pass41,da39)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 49 lun 0 (pass42,da40)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 51 lun 0 (pass43,da41)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 53 lun 0 (pass44,da42)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 55 lun 0 (da43,pass45)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 59 lun 0 (pass46,da44)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 64 lun 0 (pass47,da45)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 67 lun 0 (pass48,da46)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 68 lun 0 (pass49,da47)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 69 lun 0 (pass50,da48)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 73 lun 0 (pass51,da49)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 76 lun 0 (pass52,da50)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 77 lun 0 (pass53,da51)
(Tyan B7118 0500)                  at scbus4 target 80 lun 0 (pass54,ses2)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 81 lun 0 (pass55,da52)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 82 lun 0 (pass56,da53)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 83 lun 0 (pass57,da54)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 84 lun 0 (pass58,da55)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 85 lun 0 (pass59,da56)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 86 lun 0 (pass60,da57)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 87 lun 0 (pass61,da58)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 88 lun 0 (pass62,da59)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 89 lun 0 (da63,pass66)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 90 lun 0 (pass64,da61)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 91 lun 0 (pass65,da62)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 92 lun 0 (da60,pass63)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 94 lun 0 (pass67,da64)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 97 lun 0 (pass68,da65)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 98 lun 0 (pass69,da66)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 99 lun 0 (pass70,da67)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 100 lun 0 (pass71,da68)
(Tyan B7118 0500)                  at scbus4 target 101 lun 0 (pass72,ses3)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 102 lun 0 (pass73,da69)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 103 lun 0 (pass74,da70)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 104 lun 0 (pass75,da71)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 105 lun 0 (pass76,da72)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 106 lun 0 (pass77,da73)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 107 lun 0 (pass78,da74)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 108 lun 0 (pass79,da75)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 109 lun 0 (pass80,da76)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 110 lun 0 (pass81,da77)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 111 lun 0 (pass82,da78)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 112 lun 0 (pass83,da79)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 113 lun 0 (pass84,da80)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 114 lun 0 (pass85,da81)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 115 lun 0 (pass86,da82)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 116 lun 0 (pass87,da83)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 117 lun 0 (pass88,da84)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 118 lun 0 (pass89,da85)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 119 lun 0 (pass90,da86)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 120 lun 0 (pass91,da87)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 121 lun 0 (pass92,da88)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 122 lun 0 (pass93,da89)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 123 lun 0 (pass94,da90)
(ATA INTEL SSDSC2KB24 0100)        at scbus4 target 124 lun 0 (pass95,da91)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 125 lun 0 (da3,pass4)

One may ask how to identify which disk is which when a FAILURE comes … this is where FreeBSD’s sesutil(8) command comes in handy.

# sesutil locate all off
# sesutil locate da64 on

The first sesutil(8) command disables all location lights in the enclosure. The second one turns on the identification for disk da64.
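
The sesutil(8) tool can also print the whole enclosure map with slot descriptions – a handy cheat sheet to dump before any disk actually fails:

# sesutil map | less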

I will also make sure to NOT use the whole space of each drive. Such an idea may seem pointless but imagine the following situation. Five 12 TB disks fail after 3 years. You can not get the same model drives anymore so you get other 12 TB drives, maybe even from another manufacturer.

# grep da64 /var/run/dmesg.boot
da64 at mpr1 bus 0 scbus4 target 93 lun 0
da64:  Fixed Direct Access SPC-4 SCSI device
da64: Serial Number 98G0A1EQF95G
da64: 1200.000MB/s transfers
da64: Command Queueing enabled
da64: 11444224MB (23437770752 512 byte sectors)

A single 12 TB drive has 23437770752 sectors of 512 bytes each, which equals 12000138625024 bytes of raw capacity.

# expr 23437770752 \* 512
12000138625024

Now imagine that these other 12 TB drives from another manufacturer come with a size just 4 bytes smaller … ZFS will not allow their usage because their size is smaller.

This is why I will use exactly 11175 GB of each drive, which is more or less 1 GB short of its total 11176 GB size.

Below is the command that does that for me for all 90 disks. Note that camcontrol(8) sometimes prints the names as (daNN,passNN) and sometimes as (passNN,daNN) – as visible in the listing above – so instead of blindly taking the second field we filter for names starting with da.

# camcontrol devlist \
    | grep TOSHIBA \
    | awk '{print $NF}' \
    | tr -d '()' \
    | tr ',' '\n' \
    | grep '^da' \
    | while read DISK
      do
        gpart destroy -F                   ${DISK} 1> /dev/null 2> /dev/null
        gpart create -s GPT                ${DISK}
        gpart add -t freebsd-zfs -s 11175G ${DISK}
      done

# gpart show da64
=>         40  23437770672  da64  GPT  (11T)
           40  23435673600     1  freebsd-zfs  (11T)
  23435673640      2097072        - free -  (1.0G)


ZFS Pool Configuration

Next, we will have to create our ZFS pool. It’s probably the longest zpool command I have ever executed 🙂

As the Toshiba 12 TB disks have 4k sectors we will need to set vfs.zfs.min_auto_ashift to 12 (2^12 = 4096) to force 4k-aligned allocations.

# sysctl vfs.zfs.min_auto_ashift=12
vfs.zfs.min_auto_ashift: 12 -> 12

# zpool create nas02 \
    raidz2  da0p1  da1p1  da2p1  da3p1  da4p1  da5p1  da6p1  da7p1  da8p1  da9p1 da10p1 da12p1 \
    raidz2 da13p1 da14p1 da15p1 da16p1 da17p1 da18p1 da19p1 da20p1 da21p1 da22p1 da23p1 da24p1 \
    raidz2 da25p1 da26p1 da27p1 da28p1 da29p1 da30p1 da31p1 da32p1 da33p1 da34p1 da35p1 da36p1 \
    raidz2 da37p1 da38p1 da39p1 da40p1 da41p1 da42p1 da43p1 da44p1 da45p1 da46p1 da47p1 da48p1 \
    raidz2 da49p1 da50p1 da51p1 da52p1 da53p1 da54p1 da55p1 da56p1 da57p1 da58p1 da59p1 da60p1 \
    raidz2 da61p1 da62p1 da63p1 da64p1 da65p1 da66p1 da67p1 da68p1 da69p1 da70p1 da71p1 da72p1 \
    raidz2 da73p1 da74p1 da75p1 da76p1 da77p1 da78p1 da79p1 da80p1 da81p1 da82p1 da83p1 da84p1 \
    spare  da85p1 da86p1 da87p1 da88p1 da89p1 da90p1

# zpool status
  pool: nas02
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:00:05 with 0 errors on Fri May 31 10:26:29 2019
config:

        NAME        STATE     READ WRITE CKSUM
        nas02       ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            da0p1   ONLINE       0     0     0
            da1p1   ONLINE       0     0     0
            da2p1   ONLINE       0     0     0
            da3p1   ONLINE       0     0     0
            da4p1   ONLINE       0     0     0
            da5p1   ONLINE       0     0     0
            da6p1   ONLINE       0     0     0
            da7p1   ONLINE       0     0     0
            da8p1   ONLINE       0     0     0
            da9p1   ONLINE       0     0     0
            da10p1  ONLINE       0     0     0
            da12p1  ONLINE       0     0     0
          raidz2-1  ONLINE       0     0     0
            da13p1  ONLINE       0     0     0
            da14p1  ONLINE       0     0     0
            da15p1  ONLINE       0     0     0
            da16p1  ONLINE       0     0     0
            da17p1  ONLINE       0     0     0
            da18p1  ONLINE       0     0     0
            da19p1  ONLINE       0     0     0
            da20p1  ONLINE       0     0     0
            da21p1  ONLINE       0     0     0
            da22p1  ONLINE       0     0     0
            da23p1  ONLINE       0     0     0
            da24p1  ONLINE       0     0     0
          raidz2-2  ONLINE       0     0     0
            da25p1  ONLINE       0     0     0
            da26p1  ONLINE       0     0     0
            da27p1  ONLINE       0     0     0
            da28p1  ONLINE       0     0     0
            da29p1  ONLINE       0     0     0
            da30p1  ONLINE       0     0     0
            da31p1  ONLINE       0     0     0
            da32p1  ONLINE       0     0     0
            da33p1  ONLINE       0     0     0
            da34p1  ONLINE       0     0     0
            da35p1  ONLINE       0     0     0
            da36p1  ONLINE       0     0     0
          raidz2-3  ONLINE       0     0     0
            da37p1  ONLINE       0     0     0
            da38p1  ONLINE       0     0     0
            da39p1  ONLINE       0     0     0
            da40p1  ONLINE       0     0     0
            da41p1  ONLINE       0     0     0
            da42p1  ONLINE       0     0     0
            da43p1  ONLINE       0     0     0
            da44p1  ONLINE       0     0     0
            da45p1  ONLINE       0     0     0
            da46p1  ONLINE       0     0     0
            da47p1  ONLINE       0     0     0
            da48p1  ONLINE       0     0     0
          raidz2-4  ONLINE       0     0     0
            da49p1  ONLINE       0     0     0
            da50p1  ONLINE       0     0     0
            da51p1  ONLINE       0     0     0
            da52p1  ONLINE       0     0     0
            da53p1  ONLINE       0     0     0
            da54p1  ONLINE       0     0     0
            da55p1  ONLINE       0     0     0
            da56p1  ONLINE       0     0     0
            da57p1  ONLINE       0     0     0
            da58p1  ONLINE       0     0     0
            da59p1  ONLINE       0     0     0
            da60p1  ONLINE       0     0     0
          raidz2-5  ONLINE       0     0     0
            da61p1  ONLINE       0     0     0
            da62p1  ONLINE       0     0     0
            da63p1  ONLINE       0     0     0
            da64p1  ONLINE       0     0     0
            da65p1  ONLINE       0     0     0
            da66p1  ONLINE       0     0     0
            da67p1  ONLINE       0     0     0
            da68p1  ONLINE       0     0     0
            da69p1  ONLINE       0     0     0
            da70p1  ONLINE       0     0     0
            da71p1  ONLINE       0     0     0
            da72p1  ONLINE       0     0     0
          raidz2-6  ONLINE       0     0     0
            da73p1  ONLINE       0     0     0
            da74p1  ONLINE       0     0     0
            da75p1  ONLINE       0     0     0
            da76p1  ONLINE       0     0     0
            da77p1  ONLINE       0     0     0
            da78p1  ONLINE       0     0     0
            da79p1  ONLINE       0     0     0
            da80p1  ONLINE       0     0     0
            da81p1  ONLINE       0     0     0
            da82p1  ONLINE       0     0     0
            da83p1  ONLINE       0     0     0
            da84p1  ONLINE       0     0     0
        spares
          da85p1    AVAIL
          da86p1    AVAIL
          da87p1    AVAIL
          da88p1    AVAIL
          da89p1    AVAIL
          da90p1    AVAIL

errors: No known data errors

# zpool list nas02
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
nas02   915T  1.42M   915T        -         -     0%     0%  1.00x  ONLINE  -

# zfs list nas02
NAME    USED  AVAIL  REFER  MOUNTPOINT
nas02    88K   675T   201K  none

ZFS Settings

As the primary role of this storage will be keeping files I will use one of the largest values for recordsize – 1 MB – this helps to get a better compression ratio.

… but it will also serve as an iSCSI Target on which we will try to fit the native 4k blocks – thus the 4096 bytes (4k) recordsize setting for the iSCSI dataset.

# zfs set compression=lz4         nas02
# zfs set atime=off               nas02
# zfs set mountpoint=none         nas02
# zfs set recordsize=1m           nas02
# zfs set redundant_metadata=most nas02
# zfs create                      nas02/nfs
# zfs create                      nas02/smb
# zfs create                      nas02/iscsi
# zfs set recordsize=4k           nas02/iscsi
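
A quick way to verify that these settings landed where they should – for example:

# zfs get -o name,property,value recordsize,compression nas02 nas02/iscsi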

Also one word on redundant_metadata as it is not that obvious a parameter. To quote the zfs(8) man page:

# man zfs
(...)
redundant_metadata=all | most
  Controls what types of metadata are stored redundantly.  ZFS stores
  an extra copy of metadata, so that if a single block is corrupted,
  the amount of user data lost is limited.  This extra copy is in
  addition to any redundancy provided at the pool level (e.g. by
  mirroring or RAID-Z), and is in addition to an extra copy specified
  by the copies property (up to a total of 3 copies).  For example if
  the pool is mirrored, copies=2, and redundant_metadata=most, then ZFS
  stores 6 copies of most metadata, and 4 copies of data and some
  metadata.

  When set to all, ZFS stores an extra copy of all metadata.  If a
  single on-disk block is corrupt, at worst a single block of user data
  (which is recordsize bytes long) can be lost.

  When set to most, ZFS stores an extra copy of most types of metadata.
  This can improve performance of random writes, because less metadata
  must be written.  In practice, at worst about 100 blocks (of
  recordsize bytes each) of user data can be lost if a single on-disk
  block is corrupt.  The exact behavior of which metadata blocks are
  stored redundantly may change in future releases.

  The default value is all.
(...)

From the description above we can see that it is mostly useful on single device pools, because when we have redundancy based on RAIDZ2 (a RAID6 equivalent) we do not need to keep additional redundant copies of metadata. This helps to increase write performance.

For the record – iSCSI ZFS zvols are created with a command like the one below – as sparse volumes – also called Thin Provisioning mode.

# zfs create -s -V 16T nas02/iscsi/test
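
Because of the -s flag no refreservation is made for the zvol – easy to confirm with a query like this one:

# zfs get volsize,refreservation nas02/iscsi/test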

As we have SPARE disks we will also need to enable the zfsd(8) daemon by adding zfsd_enable=YES to the /etc/rc.conf file.
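
This can be done with sysrc(8) so you do not have to edit the file by hand – for example:

# sysrc zfsd_enable=YES
zfsd_enable: NO -> YES

# service zfsd start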

We also need to enable the autoreplace property for our pool because by default it is set to off.

# zpool get autoreplace nas02
NAME   PROPERTY     VALUE    SOURCE
nas02  autoreplace  off      default

# zpool set autoreplace=on nas02

# zpool get autoreplace nas02
NAME   PROPERTY     VALUE    SOURCE
nas02  autoreplace  on       local

Other ZFS settings live in the /boot/loader.conf file. As this system has 128 GB RAM we will let ZFS use 50 to 75% of that amount for the ARC – hence the 64 GB minimum and 96 GB maximum below.

# grep vfs.zfs /boot/loader.conf
  vfs.zfs.prefetch_disable=1
  vfs.zfs.cache_flush_disable=1
  vfs.zfs.vdev.cache.size=16M
  vfs.zfs.arc_min=64G
  vfs.zfs.arc_max=96G
  vfs.zfs.deadman_enabled=0
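
After a reboot you can confirm that the limits were applied – a quick check (these are the FreeBSD 12 sysctl names; newer releases moved them under vfs.zfs.arc.*):

# sysctl vfs.zfs.arc_min vfs.zfs.arc_max
vfs.zfs.arc_min: 68719476736
vfs.zfs.arc_max: 103079215104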

Network Configuration

This is what I really like about FreeBSD. To set up LACP link aggregation you just need 5 lines in the /etc/rc.conf file. On Red Hat Enterprise Linux you would need several files with many lines each.

# head -5 /etc/rc.conf
  defaultrouter="10.20.30.254"
  ifconfig_ixl0="up"
  ifconfig_ixl1="up"
  cloned_interfaces="lagg0"
  ifconfig_lagg0="laggproto lacp laggport ixl0 laggport ixl1 10.20.30.2/24 up"

# ifconfig lagg0
lagg0: flags=8843 metric 0 mtu 1500
        options=e507bb
        ether a0:42:3f:a0:42:3f
        inet 10.20.30.2 netmask 0xffffff00 broadcast 10.20.30.255
        laggproto lacp lagghash l2,l3,l4
        laggport: ixl0 flags=1c
        laggport: ixl1 flags=1c
        groups: lagg
        media: Ethernet autoselect
        status: active
        nd6 options=29

The Intel X710 DA-2 10GE network adapter is fully supported under FreeBSD by the ixl(4) driver.

intel-x710-da-2.jpg

Cisco Nexus Configuration

This is the Cisco Nexus configuration needed to enable LACP aggregation.

First the ports.

NEXUS-1  Eth1/32  NAS02_IXL0  connected 3  full  a-10G  SFP-H10GB-A
NEXUS-2  Eth1/32  NAS02_IXL1  connected 3  full  a-10G  SFP-H10GB-A

… and now aggregation.

interface Ethernet1/32
  description NAS02_IXL1
  switchport
  switchport access vlan 3
  mtu 9216
  channel-group 128 mode active
  no shutdown
!
interface port-channel128
  description NAS02
  switchport
  switchport access vlan 3
  mtu 9216
  vpc 128

… and the same/similar configuration on the second Cisco Nexus switch – NEXUS-2.

FreeBSD Configuration

These are the three most important configuration files on any FreeBSD system.

I will now post all settings I use on this storage system.

The /etc/rc.conf file.

# cat /etc/rc.conf
# NETWORK
  hostname="nas02.local"
  defaultrouter="10.20.30.254"
  ifconfig_ixl0="up"
  ifconfig_ixl1="up"
  cloned_interfaces="lagg0"
  ifconfig_lagg0="laggproto lacp laggport ixl0 laggport ixl1 10.20.30.2/24 up"

# KERNEL MODULES
  kld_list="${kld_list} aesni"

# DAEMON | YES
  zfs_enable=YES
  zfsd_enable=YES
  sshd_enable=YES
  ctld_enable=YES
  powerd_enable=YES

# DAEMON | NFS SERVER
  nfs_server_enable=YES
  nfs_client_enable=YES
  rpc_lockd_enable=YES
  rpc_statd_enable=YES
  rpcbind_enable=YES
  mountd_enable=YES
  mountd_flags="-r"

# OTHER
  dumpdev=NO

The /boot/loader.conf file.

# cat /boot/loader.conf
# BOOT OPTIONS
  autoboot_delay=3
  kern.geom.label.disk_ident.enable=0
  kern.geom.label.gptid.enable=0

# DISABLE INTEL HT
  machdep.hyperthreading_allowed=0

# UPDATE INTEL CPU MICROCODE AT BOOT BEFORE KERNEL IS LOADED
  cpu_microcode_load=YES
  cpu_microcode_name=/boot/firmware/intel-ucode.bin

# MODULES
  zfs_load=YES
  aio_load=YES

# RACCT/RCTL RESOURCE LIMITS
  kern.racct.enable=1

# DISABLE MEMORY TEST @ BOOT
  hw.memtest.tests=0

# PIPE KVA LIMIT | 320 MB
  kern.ipc.maxpipekva=335544320

# IPC
  kern.ipc.shmseg=1024
  kern.ipc.shmmni=1024
  kern.ipc.shmseg=1024
  kern.ipc.semmns=512
  kern.ipc.semmnu=256
  kern.ipc.semume=256
  kern.ipc.semopm=256
  kern.ipc.semmsl=512

# LARGE PAGE MAPPINGS
  vm.pmap.pg_ps_enabled=1

# ZFS TUNING
  vfs.zfs.prefetch_disable=1
  vfs.zfs.cache_flush_disable=1
  vfs.zfs.vdev.cache.size=16M
  vfs.zfs.arc_min=64G
  vfs.zfs.arc_max=96G

# ZFS DISABLE PANIC ON STALE I/O
  vfs.zfs.deadman_enabled=0

# NEWCONS SUSPEND
  kern.vt.suspendswitch=0

The /etc/sysctl.conf file.

# cat /etc/sysctl.conf
# ZFS ASHIFT
  vfs.zfs.min_auto_ashift=12

# SECURITY
  security.bsd.stack_guard_page=1

# SECURITY INTEL MDS (MICROARCHITECTURAL DATA SAMPLING) MITIGATION
  hw.mds_disable=3

# DISABLE ANNOYING THINGS
  kern.coredump=0
  hw.syscons.bell=0

# IPC
  kern.ipc.shmmax=4294967296
  kern.ipc.shmall=2097152
  kern.ipc.somaxconn=4096
  kern.ipc.maxsockbuf=5242880
  kern.ipc.shm_allow_removed=1

# NETWORK
  kern.ipc.maxsockbuf=16777216
  kern.ipc.soacceptqueue=1024
  net.inet.tcp.recvbuf_max=8388608
  net.inet.tcp.sendbuf_max=8388608
  net.inet.tcp.mssdflt=1460
  net.inet.tcp.minmss=1300
  net.inet.tcp.syncache.rexmtlimit=0
  net.inet.tcp.syncookies=0
  net.inet.tcp.tso=0
  net.inet.ip.process_options=0
  net.inet.ip.random_id=1
  net.inet.ip.redirect=0
  net.inet.icmp.drop_redirect=1
  net.inet.tcp.always_keepalive=0
  net.inet.tcp.drop_synfin=1
  net.inet.tcp.fast_finwait2_recycle=1
  net.inet.tcp.icmp_may_rst=0
  net.inet.tcp.msl=8192
  net.inet.tcp.path_mtu_discovery=0
  net.inet.udp.blackhole=1
  net.inet.tcp.blackhole=2
  net.inet.tcp.hostcache.expire=7200
  net.inet.tcp.delacktime=20

Purpose

Why would one build such an appliance? Because it’s a lot cheaper than getting a ‘branded’ one. Think about Dell EMC Data Domain for example – and not just ‘any’ Data Domain but almost the highest model – the Data Domain DD9300 at least. It would cost at least about ten times more … with smaller capacity and taking up not 4U but closer to 14U with three DS60 expanders.

But you can actually make this FreeBSD Enterprise Storage behave like Dell EMC Data Domain … or like their Dell EMC Elastic Cloud Storage for example.

The Dell EMC CloudBoost can be deployed somewhere on your VMware stack to provide the DDBoost deduplication. Then you would need OpenStack Swift as it is one of the supported backend devices.

emc-cloudboost-swift-cover.png

emc-cloudboost-swift-support.png

The OpenStack Swift package in FreeBSD is about 4-5 years behind reality (2.2.2) so you will have to use Bhyve here.

# pkg search swift
(...)
py27-swift-2.2.2_1             Highly available, distributed, eventually consistent object/blob store
(...)

Create a Bhyve virtual machine on this FreeBSD Enterprise Storage with a CentOS 7.6 system for example, then set up Swift there – it will work. With 20 physical cores to spare and 128 GB RAM you would not even notice it’s there.

This way you can use Dell EMC Networker with storage that is more than ten times cheaper.

In the past I also wrote about IBM Spectrum Protect (TSM) which would also greatly benefit from FreeBSD Enterprise Storage. I actually use this FreeBSD based storage as space for IBM Spectrum Protect (TSM) container pool directories. Exported via iSCSI it works like a charm.

You can also compare that FreeBSD Enterprise Storage to other storage appliances like iXsystems TrueNAS or EXAGRID.

Performance

You surely want to know how fast this FreeBSD Enterprise Storage performs 🙂

I will gladly share all the performance data that I gathered.

Network Performance

First the network performance.

I used iperf3 as the benchmark.

I started the server on the FreeBSD side.

# iperf3 -s

… and then I started the client on the Windows Server 2016 machine.

C:\iperf-3.1.3-win64>iperf3.exe -c nas02 -P 8
(...)
[SUM]   0.00-10.00  sec  10.8 GBytes  9.26 Gbits/sec                  receiver
(..)

This is with MTU 1500 – no Jumbo frames unfortunately 😦

Unfortunately the client system has only one physical 10GE interface, but I did another test as well – using two such boxes, each with a single 10GE interface. That saturated the dual 10GE LACP on the FreeBSD side nicely.

I also exported NFS and iSCSI to a Red Hat Enterprise Linux system. The network performance was about 500-600 MB/s on a single 10GE interface. That would be 1000-1200 MB/s on the LACP aggregation.
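
For completeness – a minimal ctl.conf(5) sketch of such an iSCSI export could look like the one below. The target name and addresses are illustrative only; the zvol path follows the earlier 16T example:

# cat /etc/ctl.conf
portal-group pg0 {
        discovery-auth-group no-authentication
        listen 10.20.30.2
}

target iqn.2019-06.local.nas02:target0 {
        auth-group no-authentication
        portal-group pg0
        lun 0 {
                path /dev/zvol/nas02/iscsi/test
                blocksize 4096
        }
}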

Disk Subsystem Performance

Now the disk subsystem.

First some naive tests using diskinfo(8) – FreeBSD’s built-in tool.

# diskinfo -ctv /dev/da12
/dev/da12
        512             # sectorsize
        12000138625024  # mediasize in bytes (11T)
        23437770752     # mediasize in sectors
        4096            # stripesize
        0               # stripeoffset
        1458933         # Cylinders according to firmware.
        255             # Heads according to firmware.
        63              # Sectors according to firmware.
        ATA TOSHIBA MG07ACA1    # Disk descr.
        98H0A11KF95G    # Disk ident.
        id1,enc@n500e081010445dbd/type@0/slot@c/elmdesc@ArrayDevice11   # Physical path
        No              # TRIM/UNMAP support
        7200            # Rotation rate in RPM
        Not_Zoned       # Zone Mode

I/O command overhead:
        time to read 10MB block      0.067031 sec       =    0.003 msec/sector
        time to read 20480 sectors   2.619989 sec       =    0.128 msec/sector
        calculated command overhead                     =    0.125 msec/sector

Seek times:
        Full stroke:      250 iter in   5.665880 sec =   22.664 msec
        Half stroke:      250 iter in   4.263047 sec =   17.052 msec
        Quarter stroke:   500 iter in   6.867914 sec =   13.736 msec
        Short forward:    400 iter in   3.057913 sec =    7.645 msec
        Short backward:   400 iter in   1.979287 sec =    4.948 msec
        Seq outer:       2048 iter in   0.169472 sec =    0.083 msec
        Seq inner:       2048 iter in   0.469630 sec =    0.229 msec

Transfer rates:
        outside:       102400 kbytes in   0.478251 sec =   214114 kbytes/sec
        middle:        102400 kbytes in   0.605701 sec =   169060 kbytes/sec
        inside:        102400 kbytes in   1.303909 sec =    78533 kbytes/sec

So now we know how fast a single disk is.

Let’s repeat the same test on the ZFS zvol device.

# diskinfo -ctv /dev/zvol/nas02/iscsi/test
/dev/zvol/nas02/iscsi/test
        512             # sectorsize
        17592186044416  # mediasize in bytes (16T)
        34359738368     # mediasize in sectors
        65536           # stripesize
        0               # stripeoffset
        Yes             # TRIM/UNMAP support
        Unknown         # Rotation rate in RPM

I/O command overhead:
        time to read 10MB block      0.004512 sec       =    0.000 msec/sector
        time to read 20480 sectors   0.196824 sec       =    0.010 msec/sector
        calculated command overhead                     =    0.009 msec/sector

Seek times:
        Full stroke:      250 iter in   0.006151 sec =    0.025 msec
        Half stroke:      250 iter in   0.008228 sec =    0.033 msec
        Quarter stroke:   500 iter in   0.014062 sec =    0.028 msec
        Short forward:    400 iter in   0.010564 sec =    0.026 msec
        Short backward:   400 iter in   0.011725 sec =    0.029 msec
        Seq outer:       2048 iter in   0.028198 sec =    0.014 msec
        Seq inner:       2048 iter in   0.028416 sec =    0.014 msec

Transfer rates:
        outside:       102400 kbytes in   0.036938 sec =  2772213 kbytes/sec
        middle:        102400 kbytes in   0.043076 sec =  2377194 kbytes/sec
        inside:        102400 kbytes in   0.034260 sec =  2988908 kbytes/sec

Almost 3 GB/s – not bad.

Time for an even more old-school test – the immortal dd(8) command.

This is with the compression=off setting.

One process.

# dd if=/dev/zero of=FILE bs=128m status=progress
26172456960 bytes (26 GB, 24 GiB) transferred 16.074s, 1628 MB/s
202+0 records in
201+0 records out
26977763328 bytes transferred in 16.660884 secs (1619227644 bytes/sec)

Four concurrent processes.

# dd if=/dev/zero of=FILE${X} bs=128m status=progress
80933289984 bytes (81 GB, 75 GiB) transferred 98.081s, 825 MB/s
608+0 records in
608+0 records out
81604378624 bytes transferred in 98.990579 secs (824365101 bytes/sec)

Eight concurrent processes.

# dd if=/dev/zero of=FILE${X} bs=128m status=progress
174214610944 bytes (174 GB, 162 GiB) transferred 385.042s, 452 MB/s
1302+0 records in
1301+0 records out
174617264128 bytes transferred in 385.379296 secs (453104943 bytes/sec)

Let’s summarize that data.

1 STREAM(s) ~ 1600 MB/s ~ 1.5 GB/s
4 STREAM(s) ~ 3300 MB/s ~ 3.2 GB/s
8 STREAM(s) ~ 3600 MB/s ~ 3.5 GB/s

So the disk subsystem is able to squeeze out 3.5 GB/s of sustained sequential write speed. That means that if we wanted to saturate it we would need to add two additional 10GE interfaces.

The disks were stressed only to about 55%, which you can see in another useful FreeBSD tool – the gstat(8) command.

n10.png

Time for more ‘intelligent’ tests. The blogbench test.

First with compression disabled.

# time blogbench -d .
Frequency = 10 secs
Scratch dir = [.]
Spawning 3 writers...
Spawning 1 rewriters...
Spawning 5 commenters...
Spawning 100 readers...
Benchmarking for 30 iterations.
The test will run during 5 minutes.
(...)
Final score for writes:          6476
Final score for reads :        660436

blogbench -d .  280.58s user 4974.41s system 1748% cpu 5:00.54 total

Second with compression set to LZ4.

# time blogbench -d .
Frequency = 10 secs
Scratch dir = [.]
Spawning 3 writers...
Spawning 1 rewriters...
Spawning 5 commenters...
Spawning 100 readers...
Benchmarking for 30 iterations.
The test will run during 5 minutes.
(...)
Final score for writes:          7087
Final score for reads :        733932

blogbench -d .  299.08s user 5415.04s system 1900% cpu 5:00.68 total

Compression did not help much here, but it helped.

To have some comparison we will run the same test on the system ZFS pool – two Intel SSD DC S3500 240 GB drives in a mirror, which have the following features:

  • Sequential Read (up to) 500 MB/s
  • Sequential Write (up to) 260 MB/s
  • Random Read (100% Span) 75000 IOPS
  • Random Write (100% Span) 7500 IOPS

# time blogbench -d .
Frequency = 10 secs
Scratch dir = [.]
Spawning 3 writers...
Spawning 1 rewriters...
Spawning 5 commenters...
Spawning 100 readers...
Benchmarking for 30 iterations.
The test will run during 5 minutes.
(...)
Final score for writes:          6109
Final score for reads :        654099

blogbench -d .  278.73s user 5058.75s system 1777% cpu 5:00.30 total

Now the randomio test. It’s a multithreaded disk I/O microbenchmark.

The usage is as follows.

usage: randomio filename nr_threads write_fraction_of_io fsync_fraction_of_writes io_size nr_seconds_between_samples

filename                    Filename or device to read/write.
nr_threads                  Number of simultaneous I/O threads.
write_fraction_of_io        What fraction of I/O should be writes - for example 0.25 for 25% write.
fsync_fraction_of_writes    What fraction of writes should be fsync'd.
io_size                     How many bytes to read/write (multiple of 512 bytes).
nr_seconds_between_samples  How many seconds to average samples over.

The randomio test with a 4k block.

# zfs create -s -V 1T nas02/iscsi/test
# randomio /dev/zvol/nas02/iscsi/test 8 0.25 1 4096 10
  total |  read:         latency (ms)       |  write:        latency (ms)
   iops |   iops   min    avg    max   sdev |   iops   min    avg    max   sdev
--------+-----------------------------------+----------------------------------
54137.7 |40648.4   0.0    0.1  575.8    2.2 |13489.4   0.0    0.3  405.8    2.6
66248.4 |49641.5   0.0    0.1   19.6    0.3 |16606.9   0.0    0.2   26.4    0.7
66411.0 |49817.2   0.0    0.1   19.7    0.3 |16593.8   0.0    0.2   20.3    0.7
64158.9 |48142.8   0.0    0.1  254.7    0.7 |16016.1   0.0    0.2  130.4    1.0
48454.1 |36390.8   0.0    0.1  542.8    2.7 |12063.3   0.0    0.3  507.5    3.2
66796.1 |50067.4   0.0    0.1   24.1    0.3 |16728.7   0.0    0.2   23.4    0.7
58512.2 |43851.7   0.0    0.1  576.5    1.7 |14660.5   0.0    0.2  307.2    1.7
63195.8 |47341.8   0.0    0.1  261.6    0.9 |15854.1   0.0    0.2  361.1    1.9
67086.0 |50335.6   0.0    0.1   20.4    0.3 |16750.4   0.0    0.2   25.1    0.8
67429.8 |50549.6   0.0    0.1   21.8    0.3 |16880.3   0.0    0.2   20.6    0.7
^C

… and with a 512 byte sector.

# zfs create -s -V 1T nas02/iscsi/test
# randomio /dev/zvol/nas02/iscsi/test 8 0.25 1 512 10
  total |  read:         latency (ms)       |  write:        latency (ms)
   iops |   iops   min    avg    max   sdev |   iops   min    avg    max   sdev
--------+-----------------------------------+----------------------------------
58218.9 |43712.0   0.0    0.1  501.5    2.1 |14506.9   0.0    0.2  272.5    1.6
66325.3 |49703.8   0.0    0.1  352.0    0.9 |16621.4   0.0    0.2  352.0    1.5
68130.5 |51100.8   0.0    0.1   24.6    0.3 |17029.7   0.0    0.2   24.4    0.7
68465.3 |51352.3   0.0    0.1   19.9    0.3 |17112.9   0.0    0.2   23.8    0.7
54903.5 |41249.1   0.0    0.1  399.3    1.9 |13654.4   0.0    0.3  335.8    2.2
61259.8 |45898.7   0.0    0.1  574.6    1.7 |15361.0   0.0    0.2  371.5    1.7
68483.3 |51313.1   0.0    0.1   22.9    0.3 |17170.3   0.0    0.2   26.1    0.7
56713.7 |42524.7   0.0    0.1  373.5    1.8 |14189.1   0.0    0.2  438.5    2.7
68861.4 |51657.0   0.0    0.1   21.0    0.3 |17204.3   0.0    0.2   21.7    0.7
68602.0 |51438.4   0.0    0.1   19.5    0.3 |17163.7   0.0    0.2   23.7    0.7
^C

Both randomio tests were run with compression set to LZ4.

Next is the bonnie++ benchmark. It was run with compression set to LZ4.

# bonnie++ -d . -u root
Using uid:0, gid:0.
Writing a byte at a time...done
Writing intelligently...done
Rewriting...done
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version  1.97       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
nas02.local 261368M   139  99 775132  99 589190  99   383  99 1638929  99 12930 2046
Latency             60266us    7030us    7059us   21553us    3844us    5710us
Version  1.97       ------Sequential Create------ --------Random Create--------
nas02.local         -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ 12680  44 +++++ +++ +++++ +++ 30049  99
Latency              2619us      43us     714ms    2748us      28us      58us

… and last but not least the fio benchmark. Also with LZ4 compression enabled.

# fio --randrepeat=1 --direct=1 --gtod_reduce=1 --name=test --filename=random_read_write.fio --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=64
fio-3.13
Starting 1 process
Jobs: 1 (f=1): [m(1)][98.0%][r=38.0MiB/s,w=12.2MiB/s][r=9735,w=3128 IOPS][eta 00m:05s]
test: (groupid=0, jobs=1): err= 0: pid=35368: Tue Jun 18 15:14:44 2019
  read: IOPS=3157, BW=12.3MiB/s (12.9MB/s)(3070MiB/248872msec)
   bw (  KiB/s): min= 9404, max=57732, per=98.72%, avg=12469.84, stdev=3082.99, samples=497
   iops        : min= 2351, max=14433, avg=3117.15, stdev=770.74, samples=497
  write: IOPS=1055, BW=4222KiB/s (4323kB/s)(1026MiB/248872msec)
   bw (  KiB/s): min= 3179, max=18914, per=98.71%, avg=4166.60, stdev=999.23, samples=497
   iops        : min=  794, max= 4728, avg=1041.25, stdev=249.76, samples=497
  cpu          : usr=1.11%, sys=88.64%, ctx=677981, majf=0, minf=0
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=785920,262656,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=12.3MiB/s (12.9MB/s), 12.3MiB/s-12.3MiB/s (12.9MB/s-12.9MB/s), io=3070MiB (3219MB), run=248872-248872msec
  WRITE: bw=4222KiB/s (4323kB/s), 4222KiB/s-4222KiB/s (4323kB/s-4323kB/s), io=1026MiB (1076MB), run=248872-248872msec

Dunno about you, but I am satisfied with this performance 🙂

FreeNAS

Originally I really wanted to use FreeNAS on these boxes and I even installed FreeNAS on them. It ran nicely but … the security part of FreeNAS was not the best.

This is the output of the pkg audit command. Quite scary.

root@freenas[~]# pkg audit -F
Fetching vuln.xml.bz2: 100%  785 KiB 804.3kB/s    00:01
python27-2.7.15 is vulnerable:
Python -- NULL pointer dereference vulnerability
CVE: CVE-2019-5010
WWW: https://vuxml.FreeBSD.org/freebsd/d74371d2-4fee-11e9-a5cd-1df8a848de3d.html

curl-7.62.0 is vulnerable:
curl -- multiple vulnerabilities
CVE: CVE-2019-3823
CVE: CVE-2019-3822
CVE: CVE-2018-16890
WWW: https://vuxml.FreeBSD.org/freebsd/714b033a-2b09-11e9-8bc3-610fd6e6cd05.html

libgcrypt-1.8.2 is vulnerable:
libgcrypt -- side-channel attack vulnerability
CVE: CVE-2018-0495
WWW: https://vuxml.FreeBSD.org/freebsd/9b5162de-6f39-11e8-818e-e8e0b747a45a.html

python36-3.6.5_1 is vulnerable:
Python -- NULL pointer dereference vulnerability
CVE: CVE-2019-5010
WWW: https://vuxml.FreeBSD.org/freebsd/d74371d2-4fee-11e9-a5cd-1df8a848de3d.html

pango-1.42.0 is vulnerable:
pango -- remote DoS vulnerability
CVE: CVE-2018-15120
WWW: https://vuxml.FreeBSD.org/freebsd/5a757a31-f98e-4bd4-8a85-f1c0f3409769.html

py36-requests-2.18.4 is vulnerable:
www/py-requests -- Information disclosure vulnerability
WWW: https://vuxml.FreeBSD.org/freebsd/50ad9a9a-1e28-11e9-98d7-0050562a4d7b.html

libnghttp2-1.31.0 is vulnerable:
nghttp2 -- Denial of service due to NULL pointer dereference
CVE: CVE-2018-1000168
WWW: https://vuxml.FreeBSD.org/freebsd/1fccb25e-8451-438c-a2b9-6a021e4d7a31.html

gnupg-2.2.6 is vulnerable:
gnupg -- unsanitized output (CVE-2018-12020)
CVE: CVE-2017-7526
CVE: CVE-2018-12020
WWW: https://vuxml.FreeBSD.org/freebsd/7da0417f-6b24-11e8-84cc-002590acae31.html

py36-cryptography-2.1.4 is vulnerable:
py-cryptography -- tag forgery vulnerability
CVE: CVE-2018-10903
WWW: https://vuxml.FreeBSD.org/freebsd/9e2d0dcf-9926-11e8-a92d-0050562a4d7b.html

perl5-5.26.1 is vulnerable:
perl -- multiple vulnerabilities
CVE: CVE-2018-6913
CVE: CVE-2018-6798
CVE: CVE-2018-6797
WWW: https://vuxml.FreeBSD.org/freebsd/41c96ffd-29a6-4dcc-9a88-65f5038fa6eb.html

libssh2-1.8.0,3 is vulnerable:
libssh2 -- multiple issues
CVE: CVE-2019-3862
CVE: CVE-2019-3861
CVE: CVE-2019-3860
CVE: CVE-2019-3858
WWW: https://vuxml.FreeBSD.org/freebsd/6e58e1e9-2636-413e-9f84-4c0e21143628.html

git-lite-2.17.0 is vulnerable:
Git -- Fix memory out-of-bounds and remote code execution vulnerabilities (CVE-2018-11233 and CVE-2018-11235)
CVE: CVE-2018-11235
CVE: CVE-2018-11233
WWW: https://vuxml.FreeBSD.org/freebsd/c7a135f4-66a4-11e8-9e63-3085a9a47796.html

gnutls-3.5.18 is vulnerable:
GnuTLS -- double free, invalid pointer access
CVE: CVE-2019-3836
CVE: CVE-2019-3829
WWW: https://vuxml.FreeBSD.org/freebsd/fb30db8f-62af-11e9-b0de-001cc0382b2f.html

13 problem(s) in the installed packages found.

root@freenas[~]# uname -a
FreeBSD freenas.local 11.2-STABLE FreeBSD 11.2-STABLE #0 r325575+95cc58ca2a0(HEAD): Mon May  6 19:08:58 EDT 2019     root@mp20.tn.ixsystems.com:/freenas-releng/freenas/_BE/objs/freenas-releng/freenas/_BE/os/sys/FreeNAS.amd64  amd64

root@freenas[~]# freebsd-version -uk
11.2-STABLE
11.2-STABLE

root@freenas[~]# sockstat -l4
USER     COMMAND    PID   FD PROTO  LOCAL ADDRESS         FOREIGN ADDRESS
root     uwsgi-3.6  4006  3  tcp4   127.0.0.1:9042        *:*
root     uwsgi-3.6  3188  3  tcp4   127.0.0.1:9042        *:*
nobody   mdnsd      3144  4  udp4   *:31417               *:*
nobody   mdnsd      3144  6  udp4   *:5353                *:*
www      nginx      3132  6  tcp4   *:443                 *:*
www      nginx      3132  8  tcp4   *:80                  *:*
root     nginx      3131  6  tcp4   *:443                 *:*
root     nginx      3131  8  tcp4   *:80                  *:*
root     ntpd       2823  21 udp4   *:123                 *:*
root     ntpd       2823  22 udp4   10.49.13.99:123       *:*
root     ntpd       2823  25 udp4   127.0.0.1:123         *:*
root     sshd       2743  5  tcp4   *:22                  *:*
root     syslog-ng  2341  19 udp4   *:1031                *:*
nobody   mdnsd      2134  3  udp4   *:39020               *:*
nobody   mdnsd      2134  5  udp4   *:5353                *:*
root     python3.6  236   22 tcp4   *:6000                *:*


I even tried to get an explanation of why FreeNAS has such outdated and insecure packages in its latest version – FreeNAS 11.2-U3 Vulnerabilities – a thread I started on their forums.

Unfortunately their policy can be summarized as ‘do not touch/change versions if it’s working’ – at least that is the impression I got.

Because of these security holes I can not recommend the use of FreeNAS and I moved to the original – the FreeBSD system.

One other interesting note. After I installed FreeBSD I wanted to import the ZFS pool created by FreeNAS. This is what I got after executing the zpool import command.

# zpool import
   pool: nas02_gr06
     id: 1275660523517109367
  state: ONLINE
 status: The pool was last accessed by another system.
 action: The pool can be imported using its name or numeric identifier and
        the '-f' flag.
   see: http://illumos.org/msg/ZFS-8000-EY
 config:

        nas02_gr06  ONLINE
          raidz2-0  ONLINE
            da58p2  ONLINE
            da59p2  ONLINE
            da60p2  ONLINE
            da61p2  ONLINE
            da62p2  ONLINE
            da63p2  ONLINE
            da64p2  ONLINE
            da26p2  ONLINE
            da65p2  ONLINE
            da23p2  ONLINE
            da29p2  ONLINE
            da66p2  ONLINE
            da67p2  ONLINE
            da68p2  ONLINE
        spares
          da69p2

   pool: nas02_gr05
     id: 5642709896812665361
  state: ONLINE
 status: The pool was last accessed by another system.
 action: The pool can be imported using its name or numeric identifier and
        the '-f' flag.
   see: http://illumos.org/msg/ZFS-8000-EY
 config:

        nas02_gr05  ONLINE
          raidz2-0  ONLINE
            da20p2  ONLINE
            da30p2  ONLINE
            da34p2  ONLINE
            da50p2  ONLINE
            da28p2  ONLINE
            da38p2  ONLINE
            da51p2  ONLINE
            da52p2  ONLINE
            da27p2  ONLINE
            da32p2  ONLINE
            da53p2  ONLINE
            da54p2  ONLINE
            da55p2  ONLINE
            da56p2  ONLINE
        spares
          da57p2

   pool: nas02_gr04
     id: 2460983830075205166
  state: ONLINE
 status: The pool was last accessed by another system.
 action: The pool can be imported using its name or numeric identifier and
        the '-f' flag.
   see: http://illumos.org/msg/ZFS-8000-EY
 config:

        nas02_gr04  ONLINE
          raidz2-0  ONLINE
            da44p2  ONLINE
            da37p2  ONLINE
            da18p2  ONLINE
            da36p2  ONLINE
            da45p2  ONLINE
            da19p2  ONLINE
            da22p2  ONLINE
            da33p2  ONLINE
            da35p2  ONLINE
            da21p2  ONLINE
            da31p2  ONLINE
            da47p2  ONLINE
            da48p2  ONLINE
            da49p2  ONLINE
        spares
          da46p2

   pool: nas02_gr03
     id: 4878868173820164207
  state: ONLINE
 status: The pool was last accessed by another system.
 action: The pool can be imported using its name or numeric identifier and
        the '-f' flag.
   see: http://illumos.org/msg/ZFS-8000-EY
 config:

        nas02_gr03  ONLINE
          raidz2-0  ONLINE
            da81p2  ONLINE
            da71p2  ONLINE
            da14p2  ONLINE
            da15p2  ONLINE
            da80p2  ONLINE
            da16p2  ONLINE
            da88p2  ONLINE
            da17p2  ONLINE
            da40p2  ONLINE
            da41p2  ONLINE
            da25p2  ONLINE
            da42p2  ONLINE
            da24p2  ONLINE
            da43p2  ONLINE
        spares
          da39p2

   pool: nas02_gr02
     id: 3299037437134217744
  state: ONLINE
 status: The pool was last accessed by another system.
 action: The pool can be imported using its name or numeric identifier and
        the '-f' flag.
   see: http://illumos.org/msg/ZFS-8000-EY
 config:

        nas02_gr02  ONLINE
          raidz2-0  ONLINE
            da84p2  ONLINE
            da76p2  ONLINE
            da85p2  ONLINE
            da8p2   ONLINE
            da9p2   ONLINE
            da78p2  ONLINE
            da73p2  ONLINE
            da74p2  ONLINE
            da70p2  ONLINE
            da77p2  ONLINE
            da11p2  ONLINE
            da13p2  ONLINE
            da79p2  ONLINE
            da89p2  ONLINE
        spares
          da90p2

   pool: nas02_gr01
     id: 1132383125952900182
  state: ONLINE
 status: The pool was last accessed by another system.
 action: The pool can be imported using its name or numeric identifier and
        the '-f' flag.
   see: http://illumos.org/msg/ZFS-8000-EY
 config:

        nas02_gr01  ONLINE
          raidz2-0  ONLINE
            da91p2  ONLINE
            da75p2  ONLINE
            da0p2   ONLINE
            da82p2  ONLINE
            da1p2   ONLINE
            da83p2  ONLINE
            da2p2   ONLINE
            da3p2   ONLINE
            da4p2   ONLINE
            da5p2   ONLINE
            da86p2  ONLINE
            da6p2   ONLINE
            da7p2   ONLINE
            da72p2  ONLINE
        spares
          da87p2



It seems that FreeNAS does ZFS a little differently – they create a separate pool for every RAIDZ2 target with dedicated spares. Interesting …

UPDATE 1 – BSD Now 305

The FreeBSD Enterprise 1 PB Storage article was featured in the BSD Now 305 – Changing Face of Unix episode.

Thanks for the mention!

UPDATE 2 – Real Life Pictures in Data Center

Some of you asked for real life pictures of this monster. Below you will find several pics taken at the data center.

Front case with cabling.

tyan-real-01.jpg

Alternate front view.

tyan-real-09.jpg

Back of the case with cabling.

tyan-real-02.jpg

Top view with disks.

tyan-real-03

Alternate top view.

tyan-real-07.jpg

Disks slots zoom.

tyan-real-08.jpg

SSD and HDD disks.

tyan-real-06.jpg

EOF

Silent Fanless FreeBSD Server – DIY Backup

I already wrote about this topic once in the Silent Fanless FreeBSD Desktop/Server article. To my pleasant surprise BSD NOW Episode 253: Silence of the Fans featured my article, for which I am very grateful. Today I would like to show another practical example of such a setup, with a more hands-on approach and real power usage measurements taken with a power meter. I also got a more power efficient ASRock N3150B-ITX motherboard with only 6W TDP, which includes a 4-core Celeron N3150 CPU, and also a nice small Supermicro SC101i Mini ITX case. Keep in mind that ASRock also made a very similar N3150-ITX motherboard (no ‘B’ in the model name) with different ports/connectors that may suit your needs better.

You may also check the follow-up Silent Fanless FreeBSD Server – Redundant Backup article.

Build

Here is how the Supermicro SC101i case looks with the ASRock N3150B-ITX motherboard installed.

silent-backup-case-external.jpg

silent-backup-case-back.jpg

One thing that surprised me very much was the hard disk cost. The internal Seagate 4TB ST4000LM024 2.5 SATA drive costs about $180-190 but the same disk sold as the Maxtor M3 4TB 2.5 disk – in an external case with the Maxtor brand (which is owned by Seagate anyway) and a USB 3.0 port – costs half of that, about $90-100. At least in the Europe/Poland location.

I think you already know where I am going with my thoughts. I will use an external Maxtor M3 4TB 2.5 drive and connect it via the USB 3.0 port in this setup. While SATA III provides a theoretical throughput of 6Gbps, USB 3.0 provides a theoretical 5Gbps. The difference can be important for low latency, high throughput SSD drives that approach 580MB/s speeds, but not for traditional rotational disks moving gently at 5400RPM.

The maximum performance I was able to squeeze from this Maxtor M3 4TB 2.5 USB 3.0 drive was a 90MB/s write speed and a 120MB/s read speed using the pv(1) tool, and that was at the beginning of the disk. These speeds drop to about 70MB/s and 90MB/s at the end of the disk, respectively, for write and read operations. We are not even approaching the SATA I standard here, which tops out at 1.5Gbps. Thus it will not make a difference – or at least not a significant one – for such storage.

At first I wanted to drill a hole in the steel plate at the motherboard end of the case (somewhere beside the back ports) to route the USB cable outside and attach it to one of the USB 3.0 ports at the back of the motherboard, but fortunately I got a better idea. This motherboard has a connector for internal USB 3.0 (the so-called front panel USB on the case) so I bought an Akyga AK-CA-57 front panel cable with a USB 3.0 port and connected everything inside the case.

This is the Akyga AK-CA-57 USB 3.0 cable.

silent-backup-usb-akyga-cable-AK-CA-57.jpg

If I was going to install two USB 3.0 disks using this method I would use a similar cable with two USB 3.0 ports instead.

The only problem can be a more physical one – will it blend, will it fit? Fortunately I was able to find a way to fit it in the case and there is even space for a second disk. As this will be my offsite backup replacement – only the 3rd stage/offsite backup – I do not need to create redundant mirror/RAID1 protection, but it’s definitely possible with two Maxtor M3 4TB 2.5 USB 3.0 drives.

The opened Supermicro SC101i case with the ASRock N3150B-ITX motherboard inside and the attached Pico PSU looks like this.

silent-backup-mobo-case.jpg

With the attached Akyga AK-CA-57 USB 3.0 cable things get a little narrow, but with a proper cable layout you will still be able to fit another internal 2.5 SATA disk or external 2.5 USB 3.0 disk.

silent-backup-mobo-case-blue.jpg

I attached the Akyga AK-CA-57 cable to this USB 3.0 connector on the motherboard.

silent-backup-mobo-case-usb.jpg

Case with the Maxtor M3 4TB disk. The disk placement required small modifications.

silent-backup-mobo-case-blue-disk.jpg

I created custom disk holders using steel plates I got from a window mosquito net set for my home, but you should be able to get something similar in any hardware shop. I modified them a little with pliers.

silent-backup-handles

I also ‘silenced’ the disk vibrations with felt stickers.

silent-backup-silence.jpg

The silenced disk in the Supermicro SC101i case.

silent-backup-mobo-case-blue-disk-silence.jpg

Ancestor

Before this setup I used a Raspberry Pi 2B with an external Western Digital 2TB 2.5 USB 3.0 disk, but the storage space requirements became larger so I needed to increase that. It was of course with GELI encryption and ZFS with LZ4 compression enabled on top. The four humble ARM32 cores and the soldered 1GB of RAM were able to squeeze out a whopping 5MB/s read/write experience from this ZFS/GELI setup, but that did not hurt me as I used rsync(1) for differential backups and the Internet connection to that box was limited to about 1.5MB/s. I would still use that setup but it just won’t boot with the larger Maxtor M3 4TB disk because it requires more power – and I already used a stronger 5V 3.1A charger than the 5V 2.0A suggested by the vendor. Even the safe_mode_gpio=4 and max_usb_current=1 options in /boot/msdos/config.txt did not help.

Cost

The complete setup price tops out at $220 total. Here are the parts used.

PRICE  COMPONENT
  $59  CPU/Motherboard ASRock N3150B-ITX Mini-ITX
  $14  RAM Crucial 4GB DDR3L 1.35V
  $13  PSU 12V 7.5A 90W Pico (internal)
   $2  PSU 12V 2.5A 30W Leader Electronics (external)
  $29  Supermicro SC101i (used)
   $3  Akyga AK-CA-57 USB 3.0 Cable
   $3  SanDisk Fit 16GB USB 2.0 Drive (system)
  $95  Maxtor M3 4TB 2.5 USB 3.0 Drive (data)
 $220  TOTAL

PSU

In the earlier Silent Fanless FreeBSD Desktop/Server article I used a quite large 90W PSU from FSP Group. Of the PSUs that I have owned only the ThinkPad W520/W530 bricks can compete in size with this beast. As this motherboard uses very little power (details below) it requires a much smaller PSU. As the FSP Group PSU has an IEC C14 slot it also requires an additional IEC C13 power cable, which makes it an even bigger solution. The new 12V 2.5A 30W one is very compact and also costs a fraction of the 90W FSP Group gojira.

New Leader Electronics PSU label.

silent-backup-psu-ext-label.jpg

Below you can see the comparison for yourself.

silent-backup-psu-compare

I also got a cheaper and less powerful Pico PSU which tops out at 12V 7.5A 90W.

silent-backup-psu-pico-12V-90W.jpg

Power Consumption

This is where it gets really interesting. I measured the power consumption with a power meter.

silent-backup-power-meter.jpg

Idle

When this box is booted without any media attached it uses only 7.5W of power while idling. While the system was idle with the SanDisk 16GB USB 2.0 drive (on which FreeBSD is installed) it used about 8.0W of power. When booted with the Maxtor M3 4TB disk inside and the SanDisk 16GB USB 2.0 drive attached it ran idle at about 8.5W of power.

Load

As I do not need full CPU speed I limited the CPU speed to 1.2GHz in the powerd(8) options. With this limit set – all 4 cores busy at 100%, two dd(8) processes reading from both the boot SanDisk 16GB drive and the Maxtor M3 4TB disk, a scrub operation in progress on the GELI-encrypted ZFS pool, and two additional find(1) processes walking both disks – the system would not pass the 13.9W barrier. Without the CPU limitation (that means with Intel Turbo Boost enabled) the system used 16.0W of power at most.
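
This is roughly the kind of load that was running – a hedged sketch, device names and pool name are illustrative only:

# dd if=/dev/da0 of=/dev/null bs=1m &   # read the SanDisk boot drive
# dd if=/dev/da1 of=/dev/null bs=1m &   # read the Maxtor data drive
# zpool scrub data                      # scrub the GELI backed pool
# find / -x -type f > /dev/null &       # metadata heavy filesystem walk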

Summary of power usage for this box.

 POWER  TYPE  CONFIGURATION
 7.5 W  IDLE  System
 8.0 W  IDLE  System + SanDisk 16GB drive
 8.5 W  IDLE  System + SanDisk 16GB drive + Maxtor M3 4TB drive + CPU 1.2 Ghz limit
 8.5 W  IDLE  System + SanDisk 16GB drive + Maxtor M3 4TB drive
13.9 W  LOAD  System + SanDisk 16GB drive + Maxtor M3 4TB drive + CPU 1.2 Ghz limit
16.0 W  LOAD  System + SanDisk 16GB drive + Maxtor M3 4TB drive

For comparison, the Raspberry Pi 2B with a 16GB MicroSD card attached used only 1.5W, but we all know how slow it is. When used with the Western Digital 2TB 2.5 USB 3.0 drive it used about 2.2W in the idle state.

Configuration for Low Power Consumption

Below are FreeBSD configuration files used in this box to lower the power consumption.

The /etc/sysctl.conf file.

# ANNOYING THINGS
  vfs.usermount=1
  kern.coredump=0
  hw.syscons.bell=0
  kern.vt.enable_bell=0

# LIMIT ZFS ARC EFFICIENTLY
  kern.maxvnodes=32768

# ALLOW UPGRADES IN JAILS
  security.jail.chflags_allowed=1

# ALLOW RAW SOCKETS IN JAILS
  security.jail.param.allow.raw_sockets=1
  security.jail.allow_raw_sockets=1

# RANDOM PID
  kern.randompid=12345

# PERFORMANCE/ALL SHARED MEMORY SEGMENTS WILL BE MAPPED TO UNPAGEABLE RAM 
  kern.ipc.shm_use_phys=1

# MEMORY OVERCOMMIT SEE tuning(7)
  vm.overcommit=2

# NETWORK/DO NOT SEND RST ON SEGMENTS TO CLOSED PORTS
  net.inet.tcp.blackhole=2

# NETWORK/DO NOT SEND PORT UNREACHABLES FOR REFUSED CONNECTS
  net.inet.udp.blackhole=1

# NETWORK/ENABLE SCTP BLACKHOLING blackhole(4) FOR MORE DETAILS
  net.inet.sctp.blackhole=1

# NETWORK/MAX SIZE OF AUTOMATIC RECEIVE BUFFER (2097152) [4x]
  net.inet.tcp.recvbuf_max=8388608

# NETWORK/MAX SIZE OF AUTOMATIC SEND BUFFER (2097152) [4x]
  net.inet.tcp.sendbuf_max=8388608

# NETWORK/MAXIMUM SOCKET BUFFER SIZE (5242880) [3.2x]
  kern.ipc.maxsockbuf=16777216

# NETWORK/MAXIMUM LISTEN SOCKET PENDING CONNECTION ACCEPT QUEUE SIZE (128) [8x]
  kern.ipc.soacceptqueue=1024

# NETWORK/DEFAULT tcp MAXIMUM SEGMENT SIZE (536) [2.7x]
  net.inet.tcp.mssdflt=1460

# NETWORK/MINIMUM TCP MAXIMUM SEGMENT SIZE (216) [6x]
  net.inet.tcp.minmss=1300

# NETWORK/LIMIT ON SYN/ACK RETRANSMISSIONS (3)
  net.inet.tcp.syncache.rexmtlimit=0

# NETWORK/USE TCP SYN COOKIES IF THE SYNCACHE OVERFLOWS (1)
  net.inet.tcp.syncookies=0

# NETWORK/ENABLE TCP SEGMENTATION OFFLOAD (1)
  net.inet.tcp.tso=0

# NETWORK/ENABLE IP OPTIONS PROCESSING ([LS]SRR, RR, TS) (1)
  net.inet.ip.process_options=0

# NETWORK/ASSIGN RANDOM ip_id VALUES (0)
  net.inet.ip.random_id=1

# NETWORK/ENABLE SENDING IP REDIRECTS (1)
  net.inet.ip.redirect=0

# NETWORK/IGNORE ICMP REDIRECTS (0)
  net.inet.icmp.drop_redirect=1

# NETWORK/ASSUME SO_KEEPALIVE ON ALL TCP CONNECTIONS (1)
  net.inet.tcp.always_keepalive=0

# NETWORK/DROP TCP PACKETS WITH SYN+FIN SET (0)
  net.inet.tcp.drop_synfin=1

# NETWORK/RECYCLE CLOSED FIN_WAIT_2 CONNECTIONS FASTER (0)
  net.inet.tcp.fast_finwait2_recycle=1

# NETWORK/CERTAIN ICMP UNREACHABLE MESSAGES MAY ABORT CONNECTIONS IN SYN_SENT (1)
  net.inet.tcp.icmp_may_rst=0

# NETWORK/MAXIMUM SEGMENT LIFETIME (30000) [0.27x]
  net.inet.tcp.msl=8192

# NETWORK/ENABLE PATH MTU DISCOVERY (1)
  net.inet.tcp.path_mtu_discovery=0

# NETWORK/EXPIRE TIME OF TCP HOSTCACHE ENTRIES (3600) [2x]
  net.inet.tcp.hostcache.expire=7200

# NETWORK/TIME BEFORE DELAYED ACK IS SENT (100) [0.2x]
  net.inet.tcp.delacktime=20

The /boot/loader.conf file.

# BOOT OPTIONS
  autoboot_delay=1
  boot_mute=YES

# MODULES FOR BOOT
  zfs_load=YES

# DISABLE HYPER THREADING
  machdep.hyperthreading_allowed=0

# REDUCE NUMBER OF SOUND GENERATED INTERRUPTS
  hw.snd.latency=7

# RACCT/RCTL RESOURCE LIMITS
  kern.racct.enable=1

# PIPE KVA LIMIT | 320 MB
  kern.ipc.maxpipekva=335544320

# NUMBER OF SEGMENTS PER PROCESS
  kern.ipc.shmseg=1024

# LARGE PAGE MAPPINGS
  vm.pmap.pg_ps_enabled=1

# SHARED MEMORY
  kern.ipc.shmmni=1024
  kern.ipc.shmseg=1024

# ZFS TUNING
  vfs.zfs.prefetch_disable=1
  vfs.zfs.cache_flush_disable=1
  vfs.zfs.vdev.cache.size=16M
  vfs.zfs.arc_min=32M
  vfs.zfs.arc_max=128M
  vfs.zfs.txg.timeout=1

# NETWORK MAX SEND QUEUE SIZE
  net.link.ifqmaxlen=2048

# POWER OFF DEVICES WITHOUT ATTACHED DRIVER
  hw.pci.do_power_nodriver=3

# AHCI POWER MANAGEMENT FOR EVERY USED CHANNEL (ahcich 0-7)
  hint.ahcich.0.pm_level=5
  hint.ahcich.1.pm_level=5
  hint.ahcich.2.pm_level=5
  hint.ahcich.3.pm_level=5
  hint.ahcich.4.pm_level=5
  hint.ahcich.5.pm_level=5
  hint.ahcich.6.pm_level=5
  hint.ahcich.7.pm_level=5

# GELI THREADS
  kern.geom.eli.threads=2
  kern.geom.eli.batch=1

The /etc/rc.conf file.

# NETWORK
  hostname=offsite.local
  background_dhclient=YES
  extra_netfs_types=NFS
  defaultroute_delay=3
  defaultroute_carrier_delay=3

# MODULES/COMMON/BASE
  kld_list="${kld_list} aesni geom_eli"
  kld_list="${kld_list} fuse coretemp sem cpuctl ichsmb cc_htcp"
  kld_list="${kld_list} libiconv cd9660_iconv msdosfs_iconv udf_iconv"

# POWER
  performance_cx_lowest=C1
  economy_cx_lowest=Cmax
  powerd_enable=YES
  powerd_flags="-n adaptive -a hiadaptive -b adaptive -m 400 -M 1200"

# DAEMONS | yes
  zfs_enable=YES
  nfs_client_enable=YES
  syslogd_flags='-s -s'
  sshd_enable=YES

# DAEMONS | no
  sendmail_enable=NONE
  sendmail_submit_enable=NO
  sendmail_outbound_enable=NO
  sendmail_msp_queue_enable=NO

# FS
  fsck_y_enable=YES
  clear_tmp_enable=YES
  clear_tmp_X=YES
  growfs_enable=YES

# OTHER
  keyrate=fast
  font8x14=vgarom-8x14
  virecover_enable=NO
  update_motd=NO
  devfs_system_ruleset=desktop
  hostid_enable=NO

USB Boot Drive

I was not sure if I should use a USB 2.0 drive or a USB 3.0 drive for the FreeBSD system so I got both versions from SanDisk and tested their performance with the pv(1) and diskinfo(8) tools. The pv(1) utility had the options shown below enabled and for diskinfo(8) the -c and -i parameters were used.

% which pv
pv: aliased to pv -t -r -a -b -W -B 1048576
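
For the record – the raw throughput tests boil down to something like the commands below (a sketch; the write test overwrites the device, so double-check the target):

# pv /dev/da0 > /dev/null    # sequential read speed
# pv /dev/zero > /dev/da0    # sequential write speed – DESTROYS ALL DATA on da0!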

The dmesg(8) information for the SanDisk Fit USB 2.0 16GB drive.

# dmesg | tail -6
da0 at umass-sim0 bus 0 scbus3 target 0 lun 0
da0:  Removable Direct Access SPC-4 SCSI device
da0: Serial Number 4C530001100609104091
da0: 40.000MB/s transfers
da0: 15060MB (30842880 512 byte sectors)
da0: quirks=0x2

The dmesg(8) information for the SanDisk Fit USB 3.0 16GB drive.

# dmesg | tail -6
da0 at umass-sim0 bus 0 scbus3 target 0 lun 0
da0:  Removable Direct Access SPC-4 SCSI device
da0: Serial Number 4C530 001070202100093
da0: 40.000MB/s transfers
da0: 14663MB (30031250 512 byte sectors)
da0: quirks=0x2

There is also a noticeable size difference as the USB 2.0 version has an additional 400 MB of space!

By the way … the SanDisk Fit USB 3.0 16GB came with this sticker inside the box – a serial number for the RescuePRO Deluxe software – which I will never use. Not because it’s bad or something but because I have no such needs. You may take it … unless of course someone else has already taken it 🙂

silent-backup-license.jpg

Below are the results of the benchmarks. I tested them in both USB 2.0 and USB 3.0 ports.


                   DRIVE  USB  pv/READ  pv/WRITE  diskinfo/OVERHEAD  diskinfo/IOPS
SanDisk Fit USB 2.0 16GB  2.0   29MB/s     5MB/s   0.712msec/sector           2521
SanDisk Fit USB 2.0 16GB  3.0   33MB/s     5MB/s   0.799msec/sector           2441
SanDisk Fit USB 3.0 16GB  2.0   35MB/s     9MB/s   0.618msec/sector           1920
SanDisk Fit USB 3.0 16GB  3.0   91MB/s    11MB/s   0.567msec/sector           1588

What is also interesting is that while the USB 2.0 version has lower throughput it has more IOPS than the newer USB 3.0 incarnation of the SanDisk Fit drive. I also did another, more real-life test. I checked how long it would take to boot the FreeBSD system installed on each of them, from the loader(8) screen to the login: prompt. The difference is 5 seconds. Details are shown below.

 TIME  DRIVE
  28s  SanDisk Fit USB 3.0 16GB
  33s  SanDisk Fit USB 2.0 16GB

With such a small (~15%) difference I will use the SanDisk Fit USB 2.0 16GB as it sticks out a little less from the slot – as shown below.

silent-backup-usb-drives.jpg

Cloud Storage Prices Comparison

Tarsnap – “online backups for the truly paranoid” – costs $0.25/GB/month. The Tarsnap price is for data transmitted after deduplication and compression but that does not change much here. For my data the compressratio property of the ZFS dataset is at 3% (1.03). When I estimate deduplication savings with the zdb -S pool command I get an additional 1% of savings (1.01). Let's assume that deduplication and compression together would give 5% (1.05) savings. That would lower the Tarsnap price to $0.2375/GB/month.
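
If you want to check these numbers on your own dataset, both values are easy to obtain – a sketch, where pool is a placeholder name and the values below just mirror the ones quoted above:

% zfs get compressratio pool
NAME  PROPERTY       VALUE  SOURCE
pool  compressratio  1.03x  -
# zdb -S pool | tail -1
dedup = 1.01, compress = 1.03, copies = 1.00, dedup * compress / copies = 1.04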

The Backblaze B2 Cloud Storage – storage costs $0.005/GB/month.

Our single 4TB disk solution costs $230 and should last, let's say, 3 years. You can expect disk failure after that period but it may serve you just as well for another 3 years. Now that we know the cloud storage prices, let's calculate the price of storing 4TB of data for 3 years in these cloud services.

Self Solution Electricity Cost

We also need to calculate how much energy our solution would consume. Currently 1 kWh of power costs about $0.20 in Europe/Poland (rounded up). This means that running a computer with 1000W power usage for 1 hour would cost you $0.20 on the electricity bill. Our solution idles at 8.5W and uses 13.9W when fully loaded. It will be idle most of the time so I will assume an average of 10W here. That works out to $0.002 for a 10W device running for 1 hour.

Below you will also find calculations for 1 day (24x multiplier), 1 year (another 365.25x multiplier) and 3 years (another 3x multiplier).

  COST  TIME
$0.002  1 HOUR
$0.048  1 DAY
$17.53  1 YEAR
$52.60  3 YEARS

Our total 3-year cost is $282.60 – $230 for building the system and $52.60 for running it non-stop. We could also implement features like Wake On LAN to limit that power usage even more.
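
These numbers are easy to reproduce with bc(1) – a small sketch assuming the 10W average draw, the $0.20/kWh rate and the $230 hardware cost from above (output formatting may vary slightly between bc(1) flavors):

% bc << __EOF
scale = 3
hour = 0.20 * 10 / 1000
day = hour * 24
year = day * 365.25
total = year * 3 + 230
print hour, "\n", day, "\n", year, "\n", total, "\n"
__EOF
.002
.048
17.532
282.596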

Here are the prices of these cloud storage providers.


PROVIDER     PRICE  DATA  TIME
Tarsnap    $0.2375   1GB  1 Month
Backblaze  $0.0050   1GB  1 Month

The price for 1 month of keeping 4TB of data on these providers looks as follows.


PROVIDER   PRICE  DATA  TIME
Tarsnap     $973   4TB  1 Month
Backblaze    $20   4TB  1 Month

For just 1 month Tarsnap is more than 3 TIMES as expensive as keeping the backup on your own computer with a 4TB disk ($973 versus $282.60 for the whole 3 years). The Backblaze service comes in at about 1/14 of that cost, which is still reasonable.

Let's compare prices for 3 years of 4TB storage.


PROVIDER    PRICE  DATA  TIME
Tarsnap    $35021   4TB  3 Years
Backblaze    $737   4TB  3 Years

After 3 years the Backblaze solution is about 2.5 TIMES more expensive than our personal setup, but if you really do not want to build your own solution the difference over 3 years is not that big. Tarsnap is out of bounds here, being more than 120 TIMES more expensive than the self-hosted solution. Remember that I also did not include the costs of transferring the data into or out of the cloud storage. That would make the cloud storage costs even bigger depending on how often you would pull/push your data.

EOF

IBM TSM (Spectrum Protect) on Veritas Cluster Server

Until today I have mostly shared articles about free and open systems. Now it's time to share some so-called enterprise experience 🙂 Not so long ago I set up an IBM TSM instance as a highly available service on Symantec Veritas Cluster Server.

ibm-tsm-logo.png

If you prefer an open and free backup solution then check the Bareos Backup Server on FreeBSD article.

IBM TSM (Tivoli Storage Manager) has been rebranded by IBM into IBM Spectrum Protect, and in a similar period of time Symantec moved Veritas Cluster Server into InfoScale Availability while spinning off a separate/dedicated Veritas company for this purpose.

The instructions I want to share today should apply equally to the latest versions of Veritas Cluster Server and its later InfoScale Availability incarnations. The IBM Spectrum Protect 8.1 family introduction was mostly about rebranding/cleaning up the whole Spectrum Protect/TSM set of modules and additions under a common 8.1 label, so these instructions – made for IBM TSM (Spectrum Protect) 7.1.6 – should still be very similar for current versions.

This highly available IBM TSM instance is part of a larger Backup Consolidation project which uses two physical servers to serve both this IBM TSM service and a Dell/EMC NetWorker backup server. When everything is OK one of the nodes is dedicated to IBM TSM and the other one is used by Dell/EMC NetWorker, so all physical resources are well saturated and we do not 'waste' a whole node that sits empty 99% of the time waiting for the first node to crash. Of course if the first node misbehaves or has a hardware failure, then both IBM TSM and Dell/EMC NetWorker run nicely on a single node. It is also very convenient for various maintenance tasks to be able to switch all services to the other node and work in peace on the first one, but I do not have to tell you that. The third and last service shared between these two nodes is the Oracle RMAN Catalog which holds metadata for the Oracle databases – also for backup/restore purposes.

I will not write instructions here for installing the operating system (we use amd64 RHEL 6.x) or for setting up the Veritas Cluster Server as I installed it earlier and it is quite simple to set up. These instructions focus on creating the IBM TSM highly available service and on using/allocating the resources from the IBM Storwize V5030 storage array, where 400 GB SSD disks are dedicated to the IBM TSM DB2 database instance and 1.8 TB 10K SAS disks are dedicated to the DRAID groups that will serve space for the IBM TSM storage pools, implemented as the latest IBM TSM container pools with deduplication and compression enabled. The head of the IBM Storwize V5030 storage array is shown below.

ibm-tsm-v5030-photo.jpg

Each node is an IBM System x3650 M4 server with two dual-port 8Gb FC cards and one dual-port 10GE card … along with built-in 1GE ports used for the Veritas Cluster Server heartbeats. Each has 192 GB RAM and two 6-core CPUs @ 3.5 GHz, which translates to 12 physical cores or 24 HTT threads per node. The three internal SSD drives are used for the system only, in a RAID1 + SPARE configuration. All clustered resources come from the IBM Storwize V5030 FC/SAN storage array. The operating system installed on these nodes is amd64 RHEL 6.x and the Veritas Cluster Server is at version 6.2.x. The IBM System x3650 M4 server is shown below.

ibm-tsm-x3650-m4.jpg

All of the settings/tuning decisions were made based on the IBM TSM documentation and the great IBM Spectrum Protect Blueprints resources from the valuable IBM developerWorks wiki.

Storage Array Setup

First we need to create the MDISKs. We used DRAID with double parity protection + spare for each MDISK, with 17 SAS 1.8 TB 10K disks each. That gives 14 disks for data, 2 for parity and 1 spare, all of which provide I/O thanks to the DRAID setup. We have three such MDISKs of ~21.7 TB each, for a total of 65.1 TB for the IBM TSM containers. Of course all these 3 'pool' MDISKs are in one Storage Group. The LUNs for the IBM TSM DB2 database come from 5 SSD 400 GB disks set up in a DRAID array with 1 parity and 1 spare disk. This gives 3 disks for data, 1 for parity and 1 for spare space – about 1.1 TB for the IBM TSM DB2 database.

Here are LUNs created from these MDISKs.

ibm-tsm-v5030.png

I needed to remove some names of course 🙂

LUNs Initialization

Veritas Cluster Server needs to have storage prepared as disk groups, which are similar in concept to (but more powerful than) LVM. Below are the instructions to first detect and then initialize these LUNs from the IBM Storwize V5030 storage array.

[root@300 ~]# haconf -makerw
[root@300 ~]# vxdisk -o alldgs list
DEVICE                TYPE            DISK         GROUP        STATUS
disk_0                auto:LVM        -            -            online invalid
storwizev70000_00000a auto:cdsdisk    -            (dg_fencing) online
storwizev70000_00000b auto:cdsdisk    stgFC_00B    NSR_dg_nsr   online
storwizev70000_00000c auto:cdsdisk    stgFC_00C    NSR_dg_nsr   online
storwizev70000_00000d auto:cdsdisk    stgFC_00D    NSR_dg_nsr   online
storwizev70000_00000e auto:cdsdisk    stgFC_00E    NSR_dg_nsr   online
storwizev70000_00000f auto:cdsdisk    -            (RMAN_dg)    online
storwizev70000_00001a auto:none       -            -            online invalid
storwizev70000_00001b auto:none       -            -            online invalid
storwizev70000_00001c auto:none       -            -            online invalid
storwizev70000_00001d auto:none       -            -            online invalid
storwizev70000_00001e auto:none       -            -            online invalid
storwizev70000_00001f auto:none       -            -            online invalid
storwizev70000_000008 auto:cdsdisk    -            (dg_fencing) online
storwizev70000_000009 auto:cdsdisk    -            (dg_fencing) online
storwizev70000_000010 auto:cdsdisk    -            (RMAN_dg)    online
storwizev70000_000011 auto:cdsdisk    -            (RMAN_dg)    online
storwizev70000_000012 auto:none       -            -            online invalid
storwizev70000_000013 auto:none       -            -            online invalid
storwizev70000_000014 auto:none       -            -            online invalid
storwizev70000_000015 auto:none       -            -            online invalid
storwizev70000_000016 auto:none       -            -            online invalid
storwizev70000_000017 auto:none       -            -            online invalid
storwizev70000_000018 auto:none       -            -            online invalid
storwizev70000_000019 auto:none       -            -            online invalid
storwizev70000_000020 auto:none       -            -            online invalid
[root@300 ~]# vxdisksetup -i storwizev70000_00001a
[root@300 ~]# vxdisksetup -i storwizev70000_00001b
[root@300 ~]# vxdisksetup -i storwizev70000_00001c
[root@300 ~]# vxdisksetup -i storwizev70000_00001d
[root@300 ~]# vxdisksetup -i storwizev70000_00001e
[root@300 ~]# vxdisksetup -i storwizev70000_00001f
[root@300 ~]# vxdisksetup -i storwizev70000_000012
[root@300 ~]# vxdisksetup -i storwizev70000_000013
[root@300 ~]# vxdisksetup -i storwizev70000_000014
[root@300 ~]# vxdisksetup -i storwizev70000_000015
[root@300 ~]# vxdisksetup -i storwizev70000_000016
[root@300 ~]# vxdisksetup -i storwizev70000_000017
[root@300 ~]# vxdisksetup -i storwizev70000_000018
[root@300 ~]# vxdisksetup -i storwizev70000_000019
[root@300 ~]# vxdisksetup -i storwizev70000_000020
[root@300 ~]# vxdisk -o alldgs list
DEVICE                TYPE            DISK         GROUP        STATUS
disk_0                auto:LVM        -            -            online invalid
storwizev70000_00000a auto:cdsdisk    -            (dg_fencing) online
storwizev70000_00000b auto:cdsdisk    stgFC_00B    NSR_dg_nsr   online
storwizev70000_00000c auto:cdsdisk    stgFC_00C    NSR_dg_nsr   online
storwizev70000_00000d auto:cdsdisk    stgFC_00D    NSR_dg_nsr   online
storwizev70000_00000e auto:cdsdisk    stgFC_00E    NSR_dg_nsr   online
storwizev70000_00000f auto:cdsdisk    -            (RMAN_dg)    online
storwizev70000_00001a auto:cdsdisk    -            -            online
storwizev70000_00001b auto:cdsdisk    -            -            online
storwizev70000_00001c auto:cdsdisk    -            -            online
storwizev70000_00001d auto:cdsdisk    -            -            online
storwizev70000_00001e auto:cdsdisk    -            -            online
storwizev70000_00001f auto:cdsdisk    -            -            online
storwizev70000_000008 auto:cdsdisk    -            (dg_fencing) online
storwizev70000_000009 auto:cdsdisk    -            (dg_fencing) online
storwizev70000_000010 auto:cdsdisk    -            (RMAN_dg)    online
storwizev70000_000011 auto:cdsdisk    -            (RMAN_dg)    online
storwizev70000_000012 auto:cdsdisk    -            -            online
storwizev70000_000013 auto:cdsdisk    -            -            online
storwizev70000_000014 auto:cdsdisk    -            -            online
storwizev70000_000015 auto:cdsdisk    -            -            online
storwizev70000_000016 auto:cdsdisk    -            -            online
storwizev70000_000017 auto:cdsdisk    -            -            online
storwizev70000_000018 auto:cdsdisk    -            -            online
storwizev70000_000019 auto:cdsdisk    -            -            online
storwizev70000_000020 auto:cdsdisk    -            -            online
[root@300 ~]# vxdg init TSM0_dg \
                stgFC_020=storwizev70000_000020 \
                stgFC_012=storwizev70000_000012 \
                stgFC_016=storwizev70000_000016 \
                stgFC_013=storwizev70000_000013 \
                stgFC_014=storwizev70000_000014 \
                stgFC_015=storwizev70000_000015 \
                stgFC_017=storwizev70000_000017 \
                stgFC_018=storwizev70000_000018 \
                stgFC_019=storwizev70000_000019 \
                stgFC_01A=storwizev70000_00001a \
                stgFC_01B=storwizev70000_00001b \
                stgFC_01C=storwizev70000_00001c \
                stgFC_01D=storwizev70000_00001d \
                stgFC_01E=storwizev70000_00001e \
                stgFC_01F=storwizev70000_00001f
[root@300 ~]# vxdisk -o alldgs list
DEVICE                TYPE            DISK         GROUP        STATUS
disk_0                auto:LVM        -            -            online invalid
storwizev70000_00000a auto:cdsdisk    -            (dg_fencing) online
storwizev70000_00000b auto:cdsdisk    stgFC_00B    NSR_dg_nsr   online
storwizev70000_00000c auto:cdsdisk    stgFC_00C    NSR_dg_nsr   online
storwizev70000_00000d auto:cdsdisk    stgFC_00D    NSR_dg_nsr   online
storwizev70000_00000e auto:cdsdisk    stgFC_00E    NSR_dg_nsr   online
storwizev70000_00000f auto:cdsdisk    -            (RMAN_dg)    online
storwizev70000_00001a auto:cdsdisk    stgFC_01A    TSM0_dg      online
storwizev70000_00001b auto:cdsdisk    stgFC_01B    TSM0_dg      online
storwizev70000_00001c auto:cdsdisk    stgFC_01C    TSM0_dg      online
storwizev70000_00001d auto:cdsdisk    stgFC_01D    TSM0_dg      online
storwizev70000_00001e auto:cdsdisk    stgFC_01E    TSM0_dg      online
storwizev70000_00001f auto:cdsdisk    stgFC_01F    TSM0_dg      online
storwizev70000_000008 auto:cdsdisk    -            (dg_fencing) online
storwizev70000_000009 auto:cdsdisk    -            (dg_fencing) online
storwizev70000_000010 auto:cdsdisk    -            (RMAN_dg)    online
storwizev70000_000011 auto:cdsdisk    -            (RMAN_dg)    online
storwizev70000_000012 auto:cdsdisk    stgFC_012    TSM0_dg      online
storwizev70000_000013 auto:cdsdisk    stgFC_013    TSM0_dg      online
storwizev70000_000014 auto:cdsdisk    stgFC_014    TSM0_dg      online
storwizev70000_000015 auto:cdsdisk    stgFC_015    TSM0_dg      online
storwizev70000_000016 auto:cdsdisk    stgFC_016    TSM0_dg      online
storwizev70000_000017 auto:cdsdisk    stgFC_017    TSM0_dg      online
storwizev70000_000018 auto:cdsdisk    stgFC_018    TSM0_dg      online
storwizev70000_000019 auto:cdsdisk    stgFC_019    TSM0_dg      online
storwizev70000_000020 auto:cdsdisk    stgFC_020    TSM0_dg      online
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_instance     maxsize=32G   stgFC_020
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_active_log   maxsize=128G  stgFC_012
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_archive_log  maxsize=384G  stgFC_016
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_db_01        maxsize=300G  stgFC_013
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_db_02        maxsize=300G  stgFC_014
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_db_03        maxsize=300G  stgFC_015
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_db_backup_01 maxsize=900G  stgFC_017
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_db_backup_02 maxsize=900G  stgFC_018
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_db_backup_03 maxsize=900G  stgFC_019
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_pool0_01     maxsize=6700G stgFC_01A
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_pool0_02     maxsize=6700G stgFC_01B
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_pool0_03     maxsize=6700G stgFC_01C
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_pool0_04     maxsize=6700G stgFC_01D
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_pool0_05     maxsize=6700G stgFC_01E
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_pool0_06     maxsize=6700G stgFC_01F
[root@300 ~]# vxprint -u h | grep ^sd | column -t
sd  stgFC_00B-01  NSR_vol_index-02          ENABLED  399.95g  0.00  -  -  -
sd  stgFC_00C-01  NSR_vol_media-02          ENABLED  9.96g    0.00  -  -  -
sd  stgFC_00D-01  NSR_vol_nsr-02            ENABLED  79.96g   0.00  -  -  -
sd  stgFC_00E-01  NSR_vol_res-02            ENABLED  9.96g    0.00  -  -  -
sd  stgFC_012-01  TSM0_vol_active_log-01    ENABLED  127.96g  0.00  -  -  -
sd  stgFC_016-01  TSM0_vol_archive_log-01   ENABLED  383.95g  0.00  -  -  -
sd  stgFC_017-01  TSM0_vol_db_backup_01-01  ENABLED  899.93g  0.00  -  -  -
sd  stgFC_018-01  TSM0_vol_db_backup_02-01  ENABLED  899.93g  0.00  -  -  -
sd  stgFC_019-01  TSM0_vol_db_backup_03-01  ENABLED  899.93g  0.00  -  -  -
sd  stgFC_013-01  TSM0_vol_db_01-01         ENABLED  299.95g  0.00  -  -  -
sd  stgFC_014-01  TSM0_vol_db_02-01         ENABLED  299.95g  0.00  -  -  -
sd  stgFC_015-01  TSM0_vol_db_03-01         ENABLED  299.95g  0.00  -  -  -
sd  stgFC_020-01  TSM0_vol_instance-01      ENABLED  31.96g   0.00  -  -  -
sd  stgFC_01A-01  TSM0_vol_pool0_01-01      ENABLED  6.54t    0.00  -  -  -
sd  stgFC_01B-01  TSM0_vol_pool0_02-01      ENABLED  6.54t    0.00  -  -  -
sd  stgFC_01C-01  TSM0_vol_pool0_03-01      ENABLED  6.54t    0.00  -  -  -
sd  stgFC_01D-01  TSM0_vol_pool0_04-01      ENABLED  6.54t    0.00  -  -  -
sd  stgFC_01E-01  TSM0_vol_pool0_05-01      ENABLED  6.54t    0.00  -  -  -
sd  stgFC_01F-01  TSM0_vol_pool0_06-01      ENABLED  6.54t    0.00  -  -  -
[root@300 ~]# vxprint -u h -g TSM0_dg | column -t
TY  NAME                      ASSOC                     KSTATE   LENGTH   PLOFFS  STATE   TUTIL0  PUTIL0
dg  TSM0_dg                   TSM0_dg                   -        -        -       -       -       -
dm  stgFC_01A                 storwizev70000_00001a     -        6.54t    -       -       -       -
dm  stgFC_01B                 storwizev70000_00001b     -        6.54t    -       -       -       -
dm  stgFC_01C                 storwizev70000_00001c     -        6.54t    -       -       -       -
dm  stgFC_01D                 storwizev70000_00001d     -        6.54t    -       -       -       -
dm  stgFC_01E                 storwizev70000_00001e     -        6.54t    -       -       -       -
dm  stgFC_01F                 storwizev70000_00001f     -        6.54t    -       -       -       -
dm  stgFC_012                 storwizev70000_000012     -        127.96g  -       -       -       -
dm  stgFC_013                 storwizev70000_000013     -        299.95g  -       -       -       -
dm  stgFC_014                 storwizev70000_000014     -        299.95g  -       -       -       -
dm  stgFC_015                 storwizev70000_000015     -        299.95g  -       -       -       -
dm  stgFC_016                 storwizev70000_000016     -        383.95g  -       -       -       -
dm  stgFC_017                 storwizev70000_000017     -        899.93g  -       -       -       -
dm  stgFC_018                 storwizev70000_000018     -        899.93g  -       -       -       -
dm  stgFC_019                 storwizev70000_000019     -        899.93g  -       -       -       -
dm  stgFC_020                 storwizev70000_000020     -        31.96g   -       -       -       -

v   TSM0_vol_active_log       fsgen                     ENABLED  127.96g  -       ACTIVE  -       -
pl  TSM0_vol_active_log-01    TSM0_vol_active_log       ENABLED  127.96g  -       ACTIVE  -       -
sd  stgFC_012-01              TSM0_vol_active_log-01    ENABLED  127.96g  0.00    -       -       -

v   TSM0_vol_archive_log      fsgen                     ENABLED  383.95g  -       ACTIVE  -       -
pl  TSM0_vol_archive_log-01   TSM0_vol_archive_log      ENABLED  383.95g  -       ACTIVE  -       -
sd  stgFC_016-01              TSM0_vol_archive_log-01   ENABLED  383.95g  0.00    -       -       -

v   TSM0_vol_db_backup_01     fsgen                     ENABLED  899.93g  -       ACTIVE  -       -
pl  TSM0_vol_db_backup_01-01  TSM0_vol_db_backup_01     ENABLED  899.93g  -       ACTIVE  -       -
sd  stgFC_017-01              TSM0_vol_db_backup_01-01  ENABLED  899.93g  0.00    -       -       -

v   TSM0_vol_db_backup_02     fsgen                     ENABLED  899.93g  -       ACTIVE  -       -
pl  TSM0_vol_db_backup_02-01  TSM0_vol_db_backup_02     ENABLED  899.93g  -       ACTIVE  -       -
sd  stgFC_018-01              TSM0_vol_db_backup_02-01  ENABLED  899.93g  0.00    -       -       -

v   TSM0_vol_db_backup_03     fsgen                     ENABLED  899.93g  -       ACTIVE  -       -
pl  TSM0_vol_db_backup_03-01  TSM0_vol_db_backup_03     ENABLED  899.93g  -       ACTIVE  -       -
sd  stgFC_019-01              TSM0_vol_db_backup_03-01  ENABLED  899.93g  0.00    -       -       -

v   TSM0_vol_db_01            fsgen                     ENABLED  299.95g  -       ACTIVE  -       -
pl  TSM0_vol_db_01-01         TSM0_vol_db_01            ENABLED  299.95g  -       ACTIVE  -       -
sd  stgFC_013-01              TSM0_vol_db_01-01         ENABLED  299.95g  0.00    -       -       -

v   TSM0_vol_db_02            fsgen                     ENABLED  299.95g  -       ACTIVE  -       -
pl  TSM0_vol_db_02-01         TSM0_vol_db_02            ENABLED  299.95g  -       ACTIVE  -       -
sd  stgFC_014-01              TSM0_vol_db_02-01         ENABLED  299.95g  0.00    -       -       -

v   TSM0_vol_db_03            fsgen                     ENABLED  299.95g  -       ACTIVE  -       -
pl  TSM0_vol_db_03-01         TSM0_vol_db_03            ENABLED  299.95g  -       ACTIVE  -       -
sd  stgFC_015-01              TSM0_vol_db_03-01         ENABLED  299.95g  0.00    -       -       -

v   TSM0_vol_instance         fsgen                     ENABLED  31.96g   -       ACTIVE  -       -
pl  TSM0_vol_instance-01      TSM0_vol_instance         ENABLED  31.96g   -       ACTIVE  -       -
sd  stgFC_020-01              TSM0_vol_instance-01      ENABLED  31.96g   0.00    -       -       -

v   TSM0_vol_pool0_01         fsgen                     ENABLED  6.54t    -       ACTIVE  -       -
pl  TSM0_vol_pool0_01-01      TSM0_vol_pool0_01         ENABLED  6.54t    -       ACTIVE  -       -
sd  stgFC_01A-01              TSM0_vol_pool0_01-01      ENABLED  6.54t    0.00    -       -       -

v   TSM0_vol_pool0_02         fsgen                     ENABLED  6.54t    -       ACTIVE  -       -
pl  TSM0_vol_pool0_02-01      TSM0_vol_pool0_02         ENABLED  6.54t    -       ACTIVE  -       -
sd  stgFC_01B-01              TSM0_vol_pool0_02-01      ENABLED  6.54t    0.00    -       -       -

v   TSM0_vol_pool0_03         fsgen                     ENABLED  6.54t    -       ACTIVE  -       -
pl  TSM0_vol_pool0_03-01      TSM0_vol_pool0_03         ENABLED  6.54t    -       ACTIVE  -       -
sd  stgFC_01C-01              TSM0_vol_pool0_03-01      ENABLED  6.54t    0.00    -       -       -

v   TSM0_vol_pool0_04         fsgen                     ENABLED  6.54t    -       ACTIVE  -       -
pl  TSM0_vol_pool0_04-01      TSM0_vol_pool0_04         ENABLED  6.54t    -       ACTIVE  -       -
sd  stgFC_01D-01              TSM0_vol_pool0_04-01      ENABLED  6.54t    0.00    -       -       -

v   TSM0_vol_pool0_05         fsgen                     ENABLED  6.54t    -       ACTIVE  -       -
pl  TSM0_vol_pool0_05-01      TSM0_vol_pool0_05         ENABLED  6.54t    -       ACTIVE  -       -
sd  stgFC_01E-01              TSM0_vol_pool0_05-01      ENABLED  6.54t    0.00    -       -       -

v   TSM0_vol_pool0_06         fsgen                     ENABLED  6.54t    -       ACTIVE  -       -
pl  TSM0_vol_pool0_06-01      TSM0_vol_pool0_06         ENABLED  6.54t    -       ACTIVE  -       -
sd  stgFC_01F-01              TSM0_vol_pool0_06-01      ENABLED  6.54t    0.00    -       -       -
[root@300 ~]# vxinfo -p -g TSM0_dg | column -t
vol   TSM0_vol_instance         fsgen   Started
plex  TSM0_vol_instance-01      ACTIVE
vol   TSM0_vol_active_log       fsgen   Started
plex  TSM0_vol_active_log-01    ACTIVE
vol   TSM0_vol_archive_log      fsgen   Started
plex  TSM0_vol_archive_log-01   ACTIVE
vol   TSM0_vol_db_01            fsgen   Started
plex  TSM0_vol_db_01-01         ACTIVE
vol   TSM0_vol_db_02            fsgen   Started
plex  TSM0_vol_db_02-01         ACTIVE
vol   TSM0_vol_db_03            fsgen   Started
plex  TSM0_vol_db_03-01         ACTIVE
vol   TSM0_vol_db_backup_01     fsgen   Started
plex  TSM0_vol_db_backup_01-01  ACTIVE
vol   TSM0_vol_db_backup_02     fsgen   Started
plex  TSM0_vol_db_backup_02-01  ACTIVE
vol   TSM0_vol_db_backup_03     fsgen   Started
plex  TSM0_vol_db_backup_03-01  ACTIVE
vol   TSM0_vol_pool0_01         fsgen   Started
plex  TSM0_vol_pool0_01-01      ACTIVE
vol   TSM0_vol_pool0_02         fsgen   Started
plex  TSM0_vol_pool0_02-01      ACTIVE
vol   TSM0_vol_pool0_03         fsgen   Started
plex  TSM0_vol_pool0_03-01      ACTIVE
vol   TSM0_vol_pool0_04         fsgen   Started
plex  TSM0_vol_pool0_04-01      ACTIVE
vol   TSM0_vol_pool0_05         fsgen   Started
plex  TSM0_vol_pool0_05-01      ACTIVE
vol   TSM0_vol_pool0_06         fsgen   Started
plex  TSM0_vol_pool0_06-01      ACTIVE
[root@300 ~]# find /dev/vx/dsk -name TSM0_\*
/dev/vx/dsk/TSM0_dg
/dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_06
/dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_05
/dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_04
/dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_03
/dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_02
/dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_01
/dev/vx/dsk/TSM0_dg/TSM0_vol_db_backup_03
/dev/vx/dsk/TSM0_dg/TSM0_vol_db_backup_02
/dev/vx/dsk/TSM0_dg/TSM0_vol_db_backup_01
/dev/vx/dsk/TSM0_dg/TSM0_vol_db_03
/dev/vx/dsk/TSM0_dg/TSM0_vol_db_02
/dev/vx/dsk/TSM0_dg/TSM0_vol_db_01
/dev/vx/dsk/TSM0_dg/TSM0_vol_archive_log
/dev/vx/dsk/TSM0_dg/TSM0_vol_active_log
/dev/vx/dsk/TSM0_dg/TSM0_vol_instance
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_pool0_06     &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_pool0_05     &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_pool0_04     &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_pool0_03     &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_pool0_02     &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_pool0_01     &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_db_backup_03 &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_db_backup_02 &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_db_backup_01 &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_db_03        &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_db_02        &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_db_01        &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_archive_log  &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_active_log   &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_instance     &

[root@300 ~]# haconf -dump -makero

Veritas Cluster Server Group

Now that we have the LUNs initialized into a Disk Group we may create the cluster service.

[root@300 ~]# haconf -makerw
[root@300 ~]# hagrp -add TSM0_site
VCS NOTICE V-16-1-10136 Group added; populating SystemList and setting the Parallel attribute recommended before adding resources
[root@300 ~]# hagrp -modify TSM0_site SystemList 300 0 301 1
[root@300 ~]# hagrp -modify TSM0_site AutoStartList 300 301
[root@300 ~]# hagrp -modify TSM0_site Parallel 0
[root@300 ~]# hares -add    TSM0_nic_bond0 NIC TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_nic_bond0 Critical 1
[root@300 ~]# hares -modify TSM0_nic_bond0 PingOptimize 1
[root@300 ~]# hares -modify TSM0_nic_bond0 Device bond0
[root@300 ~]# hares -modify TSM0_nic_bond0 Enabled 1
[root@300 ~]# hares -probe  TSM0_nic_bond0 -sys 301
[root@300 ~]# hares -add    TSM0_ip_bond0 IP TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_ip_bond0 Critical 1
[root@300 ~]# hares -modify TSM0_ip_bond0 Device bond0
[root@300 ~]# hares -modify TSM0_ip_bond0 Address 10.20.30.44
[root@300 ~]# hares -modify TSM0_ip_bond0 NetMask 255.255.255.0
[root@300 ~]# hares -modify TSM0_ip_bond0 Enabled 1
[root@300 ~]# hares -link   TSM0_ip_bond0 TSM0_nic_bond0
[root@300 ~]# hares -add    TSM0_dg DiskGroup TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_dg Critical 1
[root@300 ~]# hares -modify TSM0_dg DiskGroup TSM0_dg
[root@300 ~]# hares -modify TSM0_dg Enabled 1
[root@300 ~]# hares -probe  TSM0_dg -sys 301
[root@300 ~]# mkdir /tsm0
[root@301 ~]# mkdir /tsm0

I did not want to type all of these over and over again so I generated the commands as shown below.

[LOCAL] % cat > LIST << __EOF
stgFC_020    32  /tsm0                         TSM0_vol_instance      TSM0_mnt_instance
stgFC_012   128  /tsm0/active_log              TSM0_vol_active_log    TSM0_mnt_active_log
stgFC_016   384  /tsm0/archive_log             TSM0_vol_archive_log   TSM0_mnt_archive_log
stgFC_013   300  /tsm0/db/db_01                TSM0_vol_db_01         TSM0_mnt_db_01
stgFC_014   300  /tsm0/db/db_02                TSM0_vol_db_02         TSM0_mnt_db_02
stgFC_015   300  /tsm0/db/db_03                TSM0_vol_db_03         TSM0_mnt_db_03
stgFC_017   900  /tsm0/db_backup/db_backup_01  TSM0_vol_db_backup_01  TSM0_mnt_db_backup_01
stgFC_018   900  /tsm0/db_backup/db_backup_02  TSM0_vol_db_backup_02  TSM0_mnt_db_backup_02
stgFC_019   900  /tsm0/db_backup/db_backup_03  TSM0_vol_db_backup_03  TSM0_mnt_db_backup_03
stgFC_01A  6700  /tsm0/pool0/pool0_01          TSM0_vol_pool0_01      TSM0_mnt_pool0_01
stgFC_01B  6700  /tsm0/pool0/pool0_02          TSM0_vol_pool0_02      TSM0_mnt_pool0_02
stgFC_01C  6700  /tsm0/pool0/pool0_03          TSM0_vol_pool0_03      TSM0_mnt_pool0_03
stgFC_01D  6700  /tsm0/pool0/pool0_04          TSM0_vol_pool0_04      TSM0_mnt_pool0_04
stgFC_01E  6700  /tsm0/pool0/pool0_05          TSM0_vol_pool0_05      TSM0_mnt_pool0_05
stgFC_01F  6700  /tsm0/pool0/pool0_06          TSM0_vol_pool0_06      TSM0_mnt_pool0_06
__EOF
[LOCAL] % cat LIST \
  | while read STG SIZE MNTPOINT VOL MNTNAME
    do
      echo sleep 0.2; echo hares -add    ${MNTNAME} Mount TSM0_site
      echo sleep 0.2; echo hares -modify ${MNTNAME} Critical 1
      echo sleep 0.2; echo hares -modify ${MNTNAME} SnapUmount 0
      echo sleep 0.2; echo hares -modify ${MNTNAME} MountPoint ${MNTPOINT}
      echo sleep 0.2; echo hares -modify ${MNTNAME} BlockDevice /dev/vx/dsk/TSM0_dg/${VOL}
      echo sleep 0.2; echo hares -modify ${MNTNAME} FSType vxfs
      echo sleep 0.2; echo hares -modify ${MNTNAME} MountOpt largefiles
      echo sleep 0.2; echo hares -modify ${MNTNAME} FsckOpt %-y
      echo sleep 0.2; echo hares -modify ${MNTNAME} Enabled 1
      echo sleep 0.2; echo hares -probe  ${MNTNAME} -sys 301
      echo sleep 0.2; echo hares -link   ${MNTNAME} TSM0_dg
      echo
    done
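
The loop above only echoes the commands. I pasted the generated lines in manually, but – assuming you saved the loop as gen.sh – you could in principle feed its output straight into a shell on the first node:

[LOCAL] % sh gen.sh | ssh root@300 sh -x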
[root@300 ~]# hares -add    TSM0_mnt_instance Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_instance Critical 1
[root@300 ~]# hares -modify TSM0_mnt_instance SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_instance MountPoint /tsm0
[root@300 ~]# hares -modify TSM0_mnt_instance BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_instance
[root@300 ~]# hares -modify TSM0_mnt_instance FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_instance MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_instance FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_instance Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_instance -sys 301
[root@300 ~]# hares -link   TSM0_mnt_instance TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_active_log Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_active_log Critical 1
[root@300 ~]# hares -modify TSM0_mnt_active_log SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_active_log MountPoint /tsm0/active_log
[root@300 ~]# hares -modify TSM0_mnt_active_log BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_active_log
[root@300 ~]# hares -modify TSM0_mnt_active_log FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_active_log MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_active_log FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_active_log Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_active_log -sys 301
[root@300 ~]# hares -link   TSM0_mnt_active_log TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_archive_log Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_archive_log Critical 1
[root@300 ~]# hares -modify TSM0_mnt_archive_log SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_archive_log MountPoint /tsm0/archive_log
[root@300 ~]# hares -modify TSM0_mnt_archive_log BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_archive_log
[root@300 ~]# hares -modify TSM0_mnt_archive_log FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_archive_log MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_archive_log FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_archive_log Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_archive_log -sys 301
[root@300 ~]# hares -link   TSM0_mnt_archive_log TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_db_01 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_db_01 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_db_01 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_db_01 MountPoint /tsm0/db/db_01
[root@300 ~]# hares -modify TSM0_mnt_db_01 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_db_01
[root@300 ~]# hares -modify TSM0_mnt_db_01 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_db_01 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_db_01 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_db_01 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_db_01 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_db_01 TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_db_02 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_db_02 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_db_02 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_db_02 MountPoint /tsm0/db/db_02
[root@300 ~]# hares -modify TSM0_mnt_db_02 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_db_02
[root@300 ~]# hares -modify TSM0_mnt_db_02 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_db_02 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_db_02 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_db_02 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_db_02 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_db_02 TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_db_03 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_db_03 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_db_03 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_db_03 MountPoint /tsm0/db/db_03
[root@300 ~]# hares -modify TSM0_mnt_db_03 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_db_03
[root@300 ~]# hares -modify TSM0_mnt_db_03 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_db_03 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_db_03 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_db_03 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_db_03 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_db_03 TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_db_backup_01 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_db_backup_01 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_db_backup_01 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_db_backup_01 MountPoint /tsm0/db_backup/db_backup_01
[root@300 ~]# hares -modify TSM0_mnt_db_backup_01 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_db_backup_01
[root@300 ~]# hares -modify TSM0_mnt_db_backup_01 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_db_backup_01 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_db_backup_01 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_db_backup_01 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_db_backup_01 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_db_backup_01 TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_db_backup_02 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_db_backup_02 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_db_backup_02 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_db_backup_02 MountPoint /tsm0/db_backup/db_backup_02
[root@300 ~]# hares -modify TSM0_mnt_db_backup_02 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_db_backup_02
[root@300 ~]# hares -modify TSM0_mnt_db_backup_02 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_db_backup_02 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_db_backup_02 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_db_backup_02 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_db_backup_02 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_db_backup_02 TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_db_backup_03 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_db_backup_03 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_db_backup_03 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_db_backup_03 MountPoint /tsm0/db_backup/db_backup_03
[root@300 ~]# hares -modify TSM0_mnt_db_backup_03 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_db_backup_03
[root@300 ~]# hares -modify TSM0_mnt_db_backup_03 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_db_backup_03 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_db_backup_03 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_db_backup_03 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_db_backup_03 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_db_backup_03 TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_pool0_01 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_pool0_01 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_pool0_01 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_pool0_01 MountPoint /tsm0/pool0/pool0_01
[root@300 ~]# hares -modify TSM0_mnt_pool0_01 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_01
[root@300 ~]# hares -modify TSM0_mnt_pool0_01 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_pool0_01 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_pool0_01 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_pool0_01 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_pool0_01 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_pool0_01 TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_pool0_02 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_pool0_02 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_pool0_02 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_pool0_02 MountPoint /tsm0/pool0/pool0_02
[root@300 ~]# hares -modify TSM0_mnt_pool0_02 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_02
[root@300 ~]# hares -modify TSM0_mnt_pool0_02 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_pool0_02 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_pool0_02 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_pool0_02 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_pool0_02 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_pool0_02 TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_pool0_03 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_pool0_03 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_pool0_03 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_pool0_03 MountPoint /tsm0/pool0/pool0_03
[root@300 ~]# hares -modify TSM0_mnt_pool0_03 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_03
[root@300 ~]# hares -modify TSM0_mnt_pool0_03 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_pool0_03 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_pool0_03 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_pool0_03 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_pool0_03 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_pool0_03 TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_pool0_04 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_pool0_04 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_pool0_04 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_pool0_04 MountPoint /tsm0/pool0/pool0_04
[root@300 ~]# hares -modify TSM0_mnt_pool0_04 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_04
[root@300 ~]# hares -modify TSM0_mnt_pool0_04 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_pool0_04 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_pool0_04 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_pool0_04 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_pool0_04 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_pool0_04 TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_pool0_05 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_pool0_05 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_pool0_05 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_pool0_05 MountPoint /tsm0/pool0/pool0_05
[root@300 ~]# hares -modify TSM0_mnt_pool0_05 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_05
[root@300 ~]# hares -modify TSM0_mnt_pool0_05 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_pool0_05 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_pool0_05 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_pool0_05 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_pool0_05 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_pool0_05 TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_pool0_06 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_pool0_06 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_pool0_06 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_pool0_06 MountPoint /tsm0/pool0/pool0_06
[root@300 ~]# hares -modify TSM0_mnt_pool0_06 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_06
[root@300 ~]# hares -modify TSM0_mnt_pool0_06 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_pool0_06 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_pool0_06 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_pool0_06 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_pool0_06 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_pool0_06 TSM0_dg
[root@300 ~]# hares -state | grep TSM0 | grep _mnt_ | \
                while read I; do hares -display $I 2>&1 | grep -v ArgListValues | grep 'largefiles'; done | column -t
TSM0_mnt_active_log    MountOpt  localclus  largefiles
TSM0_mnt_active_log    MountOpt  localclus  largefiles
TSM0_mnt_archive_log   MountOpt  localclus  largefiles
TSM0_mnt_archive_log   MountOpt  localclus  largefiles
TSM0_mnt_db_01         MountOpt  localclus  largefiles
TSM0_mnt_db_01         MountOpt  localclus  largefiles
TSM0_mnt_db_02         MountOpt  localclus  largefiles
TSM0_mnt_db_02         MountOpt  localclus  largefiles
TSM0_mnt_db_03         MountOpt  localclus  largefiles
TSM0_mnt_db_03         MountOpt  localclus  largefiles
TSM0_mnt_db_backup_01  MountOpt  localclus  largefiles
TSM0_mnt_db_backup_01  MountOpt  localclus  largefiles
TSM0_mnt_db_backup_02  MountOpt  localclus  largefiles
TSM0_mnt_db_backup_02  MountOpt  localclus  largefiles
TSM0_mnt_db_backup_03  MountOpt  localclus  largefiles
TSM0_mnt_db_backup_03  MountOpt  localclus  largefiles
TSM0_mnt_instance      MountOpt  localclus  largefiles
TSM0_mnt_instance      MountOpt  localclus  largefiles
TSM0_mnt_pool0_01      MountOpt  localclus  largefiles
TSM0_mnt_pool0_01      MountOpt  localclus  largefiles
TSM0_mnt_pool0_02      MountOpt  localclus  largefiles
TSM0_mnt_pool0_02      MountOpt  localclus  largefiles
TSM0_mnt_pool0_03      MountOpt  localclus  largefiles
TSM0_mnt_pool0_03      MountOpt  localclus  largefiles
TSM0_mnt_pool0_04      MountOpt  localclus  largefiles
TSM0_mnt_pool0_04      MountOpt  localclus  largefiles
TSM0_mnt_pool0_05      MountOpt  localclus  largefiles
TSM0_mnt_pool0_05      MountOpt  localclus  largefiles
TSM0_mnt_pool0_06      MountOpt  localclus  largefiles
TSM0_mnt_pool0_06      MountOpt  localclus  largefiles
[root@300 ~]# hares -add    TSM0_server Application TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_server StartProgram   "/etc/init.d/tsm0 start"
[root@300 ~]# hares -modify TSM0_server StopProgram    "/etc/init.d/tsm0 stop"
[root@300 ~]# hares -modify TSM0_server MonitorProgram "/etc/init.d/tsm0 status"
[root@300 ~]# hares -modify TSM0_server Enabled 1
[root@300 ~]# hares -probe  TSM0_server -sys 301
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_active_log
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_archive_log
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_db_01
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_db_02
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_db_03
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_db_backup_01
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_db_backup_02
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_db_backup_03
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_pool0_01
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_pool0_02
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_pool0_03
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_pool0_04
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_pool0_05
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_pool0_06
[root@300 ~]# hares -link   TSM0_server           TSM0_ip_bond0
[root@300 ~]# hares -link   TSM0_mnt_active_log   TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_archive_log  TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_db_01        TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_db_02        TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_db_03        TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_db_backup_01 TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_db_backup_02 TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_db_backup_03 TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_pool0_01     TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_pool0_02     TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_pool0_03     TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_pool0_04     TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_pool0_05     TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_pool0_06     TSM0_mnt_instance
[root@300 ~]# vxdg import TSM0_dg
[root@300 ~]# mount -t vxfs /dev/vx/dsk/TSM0_dg/TSM0_vol_instance /tsm0
[root@300 ~]# mkdir -p /tsm0/active_log
[root@300 ~]# mkdir -p /tsm0/archive_log
[root@300 ~]# mkdir -p /tsm0/db/db_01
[root@300 ~]# mkdir -p /tsm0/db/db_02
[root@300 ~]# mkdir -p /tsm0/db/db_03
[root@300 ~]# mkdir -p /tsm0/db_backup/db_backup_01
[root@300 ~]# mkdir -p /tsm0/db_backup/db_backup_02
[root@300 ~]# mkdir -p /tsm0/db_backup/db_backup_03
[root@300 ~]# mkdir -p /tsm0/pool0/pool0_01
[root@300 ~]# mkdir -p /tsm0/pool0/pool0_02
[root@300 ~]# mkdir -p /tsm0/pool0/pool0_03
[root@300 ~]# mkdir -p /tsm0/pool0/pool0_04
[root@300 ~]# mkdir -p /tsm0/pool0/pool0_05
[root@300 ~]# mkdir -p /tsm0/pool0/pool0_06
[root@300 ~]# find /tsm0
/tsm0
/tsm0/lost+found
/tsm0/active_log
/tsm0/archive_log
/tsm0/db
/tsm0/db/db_01
/tsm0/db/db_02
/tsm0/db/db_03
/tsm0/db_backup
/tsm0/db_backup/db_backup_01
/tsm0/db_backup/db_backup_02
/tsm0/db_backup/db_backup_03
/tsm0/pool0
/tsm0/pool0/pool0_01
/tsm0/pool0/pool0_02
/tsm0/pool0/pool0_03
/tsm0/pool0/pool0_04
/tsm0/pool0/pool0_05
/tsm0/pool0/pool0_06
[root@300 ~]# umount /tsm0
[root@300 ~]# vxdg deport TSM0_dg
[root@300 ~]# haconf -dump -makero
[root@300 ~]# grep TSM0_server /etc/VRTSvcs/conf/config/main.cf
        Application TSM0_server (
        TSM0_server requires TSM0_ip_bond0
        TSM0_server requires TSM0_mnt_active_log
        TSM0_server requires TSM0_mnt_archive_log
        TSM0_server requires TSM0_mnt_db_01
        TSM0_server requires TSM0_mnt_db_02
        TSM0_server requires TSM0_mnt_db_03
        TSM0_server requires TSM0_mnt_db_backup_01
        TSM0_server requires TSM0_mnt_db_backup_02
        TSM0_server requires TSM0_mnt_db_backup_03
        TSM0_server requires TSM0_mnt_instance
        TSM0_server requires TSM0_mnt_pool0_01
        TSM0_server requires TSM0_mnt_pool0_02
        TSM0_server requires TSM0_mnt_pool0_03
        TSM0_server requires TSM0_mnt_pool0_04
        TSM0_server requires TSM0_mnt_pool0_05
        TSM0_server requires TSM0_mnt_pool0_06
        //      Application TSM0_server

Local Per Node Resources

Each node also gets its own local filesystems – for /tmp, /opt/tivoli and /home – created on local LVM storage on both nodes.

[root@300 ~]# lvcreate -n lv_tmp        -L  4G vg_local
[root@300 ~]# lvcreate -n lv_opt_tivoli -L 16G vg_local
[root@300 ~]# lvcreate -n lv_home       -L  4G vg_local
[root@300 ~]# mkfs.ext3 /dev/vg_local/lv_tmp
[root@300 ~]# mkfs.ext3 /dev/vg_local/lv_opt_tivoli
[root@300 ~]# mkfs.ext3 /dev/vg_local/lv_home
[root@301 ~]# lvcreate -n lv_tmp        -L  4G vg_local
[root@301 ~]# lvcreate -n lv_opt_tivoli -L 16G vg_local
[root@301 ~]# lvcreate -n lv_home       -L  4G vg_local
[root@301 ~]# mkfs.ext3 /dev/vg_local/lv_tmp
[root@301 ~]# mkfs.ext3 /dev/vg_local/lv_opt_tivoli
[root@301 ~]# mkfs.ext3 /dev/vg_local/lv_home
[root@300 ~]# cat /etc/fstab
/dev/mapper/vg_local-lv_root              /           ext3 rw,noatime,nodiratime      1 1
UUID=28d0988a-e6d7-48d8-b0e5-0f70f8eb681e /boot       ext3 defaults                   1 2
UUID=D401-661A                            /boot/efi   vfat umask=0077,shortname=winnt 0 0
/dev/vg_local/lv_swap                     swap        swap defaults                   0 0
/dev/vg_local/lv_tmp                      /tmp        ext3 rw,noatime,nodiratime      2 2
/dev/vg_local/lv_opt_tivoli               /opt/tivoli ext3 rw,noatime,nodiratime      2 2
/dev/vg_local/lv_home                     /home       ext3 rw,noatime,nodiratime      2 2

# VIRT
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0

Install IBM TSM Server Dependencies.

[root@ANY ~]# yum install numactl
[root@ANY ~]# yum install /usr/lib/libgtk-x11-2.0.so.0
[root@ANY ~]# yum install /usr/lib64/libgtk-x11-2.0.so.0
[root@ANY ~]# yum install xorg-x11-xauth xterm fontconfig libICE \
                          libX11-common libXau libXmu libSM libX11 libXt

System /etc/sysctl.conf parameters for both nodes.

[root@300 ~]# cat /etc/sysctl.conf
# Controls IP packet forwarding
net.ipv4.ip_forward = 0

# Controls source route verification
net.ipv4.conf.default.rp_filter = 1

# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0

# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0

# Controls whether core dumps will append the PID to the core filename.
# Useful for debugging multi-threaded applications.
kernel.core_uses_pid = 1

# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1

# Disable netfilter on bridges.
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0

# Controls the default maximum size of a message queue
kernel.msgmnb = 65536

# Controls the maximum size of a message, in bytes
kernel.msgmax = 65536

# Controls the maximum shared segment size, in bytes
kernel.shmmax = 206158430208

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296

# For SF HA
kernel.hung_task_panic=0

# NetWorker
# connection backlog (hash tables) to the maximum value allowed
net.ipv4.tcp_max_syn_backlog = 8192
net.core.netdev_max_backlog = 8192

# increase the memory size available for TCP buffers
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 8192 524288 16777216
net.ipv4.tcp_wmem = 8192 524288 16777216

# recommended keepalive values
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 20
net.ipv4.tcp_keepalive_time = 600

# recommended timeout after improper close
net.ipv4.tcp_fin_timeout = 60
sunrpc.tcp_slot_table_entries = 64

# for RDBMS 11.2.0.4 rman cat
fs.suid_dumpable = 1
fs.aio-max-nr = 1048576
fs.file-max = 6815744

# support EMC 2016.04.20
net.core.somaxconn = 1024

# 256 * RAM in GB
kernel.shmmni = 65536

# TSM/NSR
kernel.sem = 250 256000 32 65536

# RAM in GB * 1024
kernel.msgmni = 262144

# TSM
kernel.randomize_va_space = 0
vm.swappiness = 0
vm.overcommit_memory = 0
The /etc/sysctl.conf file on node 301 is identical to the one above.
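
To apply these parameters without a reboot, reload the file on both nodes:

[root@300 ~]# sysctl -p
[root@301 ~]# sysctl -p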

Install IBM TSM Server

Connect to each node with SSH X11 forwarding enabled and install the IBM TSM server.

[root@300 ~]# chmod +x 7.1.6.000-TIV-TSMSRV-Linuxx86_64.bin
[root@300 ~]# ./7.1.6.000-TIV-TSMSRV-Linuxx86_64.bin
[root@300 ~]# ./install.sh

… and the second node.

[root@301 ~]# chmod +x 7.1.6.000-TIV-TSMSRV-Linuxx86_64.bin
[root@301 ~]# ./7.1.6.000-TIV-TSMSRV-Linuxx86_64.bin
[root@301 ~]# ./install.sh

Options chosen during installation.

INSTALL | DESELECT 'Languages' and DESELECT 'Operations Center'
INSTALL | /opt/tivoli/IBM/IBMIMShared
INSTALL | /opt/tivoli/IBM/InstallationManager/eclipse
INSTALL | /opt/tivoli/tsm

Screenshots from the installation process.

ibm-tsm-install-01

ibm-tsm-install-02

ibm-tsm-install-03

ibm-tsm-install-04

ibm-tsm-install-05

ibm-tsm-install-06

Install IBM TSM Client

[root@300 ~]# yum localinstall gskcrypt64-8.0.50.66.linux.x86_64.rpm \
                               gskssl64-8.0.50.66.linux.x86_64.rpm \
                               TIVsm-API64.x86_64.rpm \
                               TIVsm-BA.x86_64.rpm
[root@301 ~]# yum localinstall gskcrypt64-8.0.50.66.linux.x86_64.rpm \
                               gskssl64-8.0.50.66.linux.x86_64.rpm \
                               TIVsm-API64.x86_64.rpm \
                               TIVsm-BA.x86_64.rpm

Nodes Configuration for IBM TSM Server

[root@300 ~]# useradd -u 1500 -m tsm0
[root@301 ~]# useradd -u 1500 -m tsm0
[root@300 ~]# passwd tsm0
Changing password for user tsm0.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

[root@301 ~]# passwd tsm0
Changing password for user tsm0.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
[root@300 ~]# tail -1 /etc/passwd
tsm0:x:1500:1500::/home/tsm0:/bin/bash

[root@301 ~]# tail -1 /etc/passwd
tsm0:x:1500:1500::/home/tsm0:/bin/bash
[root@300 ~]# tail -1 /etc/group
tsm0:x:1500:

[root@301 ~]# tail -1 /etc/group
tsm0:x:1500:
[root@300 ~]# cat /etc/security/limits.conf
# ORACLE
oracle              soft    nproc   16384
oracle              hard    nproc   16384
oracle              soft    nofile  4096
oracle              hard    nofile  65536
oracle              soft    stack   10240

# TSM
tsm0                soft    nofile  32768
tsm0                hard    nofile  32768

[root@301 ~]# cat /etc/security/limits.conf
# ORACLE
oracle              soft    nproc   16384
oracle              hard    nproc   16384
oracle              soft    nofile  4096
oracle              hard    nofile  65536
oracle              soft    stack   10240

# TSM
tsm0                soft    nofile  32768
tsm0                hard    nofile  32768
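
The limits above are applied by PAM at login time, so a quick way to confirm them is to open a fresh session as the tsm0 user – both values should come back as 32768 if the entries are in effect:

[root@300 ~]# su - tsm0 -c 'ulimit -Sn; ulimit -Hn'
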
[root@300 ~]# :> /var/run/dsmserv_tsm0.pid
[root@301 ~]# :> /var/run/dsmserv_tsm0.pid
[root@300 ~]# chown tsm0:tsm0 /var/run/dsmserv_tsm0.pid
[root@301 ~]# chown tsm0:tsm0 /var/run/dsmserv_tsm0.pid
[root@300 ~]# hares -state | grep TSM
TSM0_dg               State                 300  OFFLINE
TSM0_dg               State                 301  OFFLINE
TSM0_ip_bond0         State                 300  OFFLINE
TSM0_ip_bond0         State                 301  OFFLINE
TSM0_mnt_active_log   State                 300  OFFLINE
TSM0_mnt_active_log   State                 301  OFFLINE
TSM0_mnt_archive_log  State                 300  OFFLINE
TSM0_mnt_archive_log  State                 301  OFFLINE
TSM0_mnt_db_01        State                 300  OFFLINE
TSM0_mnt_db_01        State                 301  OFFLINE
TSM0_mnt_db_02        State                 300  OFFLINE
TSM0_mnt_db_02        State                 301  OFFLINE
TSM0_mnt_db_03        State                 300  OFFLINE
TSM0_mnt_db_03        State                 301  OFFLINE
TSM0_mnt_db_backup_01 State                 300  OFFLINE
TSM0_mnt_db_backup_01 State                 301  OFFLINE
TSM0_mnt_db_backup_02 State                 300  OFFLINE
TSM0_mnt_db_backup_02 State                 301  OFFLINE
TSM0_mnt_db_backup_03 State                 300  OFFLINE
TSM0_mnt_db_backup_03 State                 301  OFFLINE
TSM0_mnt_instance     State                 300  OFFLINE
TSM0_mnt_instance     State                 301  OFFLINE
TSM0_mnt_pool0_01     State                 300  OFFLINE
TSM0_mnt_pool0_01     State                 301  OFFLINE
TSM0_mnt_pool0_02     State                 300  OFFLINE
TSM0_mnt_pool0_02     State                 301  OFFLINE
TSM0_mnt_pool0_03     State                 300  OFFLINE
TSM0_mnt_pool0_03     State                 301  OFFLINE
TSM0_mnt_pool0_04     State                 300  OFFLINE
TSM0_mnt_pool0_04     State                 301  OFFLINE
TSM0_mnt_pool0_05     State                 300  OFFLINE
TSM0_mnt_pool0_05     State                 301  OFFLINE
TSM0_mnt_pool0_06     State                 300  OFFLINE
TSM0_mnt_pool0_06     State                 301  OFFLINE
TSM0_nic_bond0        State                 300  ONLINE
TSM0_nic_bond0        State                 301  ONLINE
TSM0_server           State                 300  OFFLINE
TSM0_server           State                 301  OFFLINE
[root@300 ~]# hares -online TSM0_mnt_instance -sys $( hostname -s )
[root@300 ~]# hares -online TSM0_ip_bond0     -sys $( hostname -s )
[root@300 ~]# hares -state | grep TSM0 | grep 301 | grep mnt | grep -v instance | awk '{print $1}' \
                | while read I; do hares -online ${I} -sys $( hostname -s ); done
[root@300 ~]# hares -state | grep 301 | grep TSM0
TSM0_dg               State                 301  ONLINE
TSM0_ip_bond0         State                 301  ONLINE
TSM0_mnt_active_log   State                 301  ONLINE
TSM0_mnt_archive_log  State                 301  ONLINE
TSM0_mnt_db_01        State                 301  ONLINE
TSM0_mnt_db_02        State                 301  ONLINE
TSM0_mnt_db_03        State                 301  ONLINE
TSM0_mnt_db_backup_01 State                 301  ONLINE
TSM0_mnt_db_backup_02 State                 301  ONLINE
TSM0_mnt_db_backup_03 State                 301  ONLINE
TSM0_mnt_instance     State                 301  ONLINE
TSM0_mnt_pool0_01     State                 301  ONLINE
TSM0_mnt_pool0_02     State                 301  ONLINE
TSM0_mnt_pool0_03     State                 301  ONLINE
TSM0_mnt_pool0_04     State                 301  ONLINE
TSM0_mnt_pool0_05     State                 301  ONLINE
TSM0_mnt_pool0_06     State                 301  ONLINE
TSM0_nic_bond0        State                 301  ONLINE
TSM0_server           State                 301  OFFLINE
[root@300 ~]# find /tsm0 | grep -v 'lost+found'
/tsm0
/tsm0/active_log
/tsm0/archive_log
/tsm0/db
/tsm0/db/db_01
/tsm0/db/db_02
/tsm0/db/db_03
/tsm0/db_backup
/tsm0/db_backup/db_backup_01
/tsm0/db_backup/db_backup_02
/tsm0/db_backup/db_backup_03
/tsm0/pool0
/tsm0/pool0/pool0_01
/tsm0/pool0/pool0_02
/tsm0/pool0/pool0_03
/tsm0/pool0/pool0_04
/tsm0/pool0/pool0_05
/tsm0/pool0/pool0_06
[root@300 ~]# chown -R tsm0:tsm0 /tsm0

IBM TSM Server Configuration

Connect to one of the nodes with SSH Forwarding enabled.

[root@300 ~]# cd /opt/tivoli/tsm/server/bin
[root@300 /opt/tivoli/tsm/server/bin]# ./dsmicfgx
Preparing to install...
Extracting the JRE from the installer archive...
Unpacking the JRE...
Extracting the installation resources from the installer archive...
Configuring the installer for this system's environment...

Launching installer...

Options chosen during configuration.

INSTALL | Instance user ID:
INSTALL |    tsm0
INSTALL |
INSTALL | Instance directory:
INSTALL |    /tsm0
INSTALL |
INSTALL | Database directories:
INSTALL |    /tsm0/db/db_01
INSTALL |    /tsm0/db/db_02
INSTALL |    /tsm0/db/db_03
INSTALL |
INSTALL | Active log directory:
INSTALL |    /tsm0/active_log
INSTALL |
INSTALL | Primary archive log directory:
INSTALL |    /tsm0/archive_log
INSTALL |
INSTALL | Instance autostart setting:
INSTALL |    Start automatically using the instance user ID

Screenshots from the configuration process.

ibm-tsm-configure-01

ibm-tsm-configure-02

ibm-tsm-configure-03

ibm-tsm-configure-04

ibm-tsm-configure-05

ibm-tsm-configure-06

ibm-tsm-configure-07

ibm-tsm-configure-08

ibm-tsm-configure-09

Log from the IBM TSM DB2 instance creation.

Creating the database manager instance...
The database manager instance was created successfully.

Formatting the server database...

ANR7800I DSMSERV generated at 16:39:04 on Jun  8 2016.

IBM Tivoli Storage Manager for Linux/x86_64
Version 7, Release 1, Level 6.000

Licensed Materials - Property of IBM

(C) Copyright IBM Corporation 1990, 2016.
All rights reserved.
U.S. Government Users Restricted Rights - Use, duplication or disclosure
restricted by GSA ADP Schedule Contract with IBM Corporation.

ANR7801I Subsystem process ID is 5208.
ANR0900I Processing options file /tsm0/dsmserv.opt.
ANR0010W Unable to open message catalog for language en_US.UTF-8. The default
language message catalog will be used.
ANR7814I Using instance directory /tsm0.
ANR4726I The ICC support module has been loaded.
ANR0152I Database manager successfully started.
ANR2976I Offline DB backup for database TSMDB1 started.
ANR2974I Offline DB backup for database TSMDB1 completed successfully.
ANR0992I Server's database formatting complete.
ANR0369I Stopping the database manager because of a server shutdown.

Format completed with return code 0
Beginning initial configuration...

ANR7800I DSMSERV generated at 16:39:04 on Jun  8 2016.

IBM Tivoli Storage Manager for Linux/x86_64
Version 7, Release 1, Level 6.000

Licensed Materials - Property of IBM

(C) Copyright IBM Corporation 1990, 2016.
All rights reserved.
U.S. Government Users Restricted Rights - Use, duplication or disclosure
restricted by GSA ADP Schedule Contract with IBM Corporation.

ANR7801I Subsystem process ID is 8741.
ANR0900I Processing options file /tsm0/dsmserv.opt.
ANR0010W Unable to open message catalog for language en_US.UTF-8. The default
language message catalog will be used.
ANR7814I Using instance directory /tsm0.
ANR4726I The ICC support module has been loaded.
ANR0990I Server restart-recovery in progress.
ANR0152I Database manager successfully started.
ANR1628I The database manager is using port 51500 for server connections.
ANR1636W The server machine GUID changed: old value (), new value (f0.8a.27.61-
.e5.43.b6.11.92.b5.00.0a.f7.49.31.18).
ANR2100I Activity log process has started.
ANR3733W The master encryption key cannot be generated because the server
password is not set.
ANR3339I Default Label in key data base is TSM Server SelfSigned Key.
ANR4726I The NAS-NDMP support module has been loaded.
ANR1794W TSM SAN discovery is disabled by options.
ANR2200I Storage pool BACKUPPOOL defined (device class DISK).
ANR2200I Storage pool ARCHIVEPOOL defined (device class DISK).
ANR2200I Storage pool SPACEMGPOOL defined (device class DISK).
ANR2560I Schedule manager started.
ANR0993I Server initialization complete.
ANR0916I TIVOLI STORAGE MANAGER distributed by Tivoli is now ready for use.
ANR2094I Server name set to TSM0.
ANR4865W The server name has been changed. Windows clients that use "passworda-
ccess generate" may be unable to authenticate with the server.
ANR2068I Administrator ADMIN registered.
ANR2076I System privilege granted to administrator ADMIN.
ANR1912I Stopping the activity log because of a server shutdown.
ANR0369I Stopping the database manager because of a server shutdown.

Configuration is complete.

Modify IBM TSM Server Startup Script

The startup script was modified to work properly with the Veritas Cluster Server – the modifications are visible in the diff(1) output after the listing below.

[root@300 ~]# cat /etc/init.d/tsm0
#!/bin/bash
#
# dsmserv       Start/Stop IBM Tivoli Storage Manager
#
# chkconfig: - 90 10
# description: Starts/Stops an IBM Tivoli Storage Manager Server instance
# processname: dsmserv
# pidfile: /var/run/dsmserv_instancename.pid

#***********************************************************************
# Distributed Storage Manager (ADSM)                                   *
# Server Component                                                     *
#                                                                      *
# IBM Confidential                                                     *
# (IBM Confidential-Restricted when combined with the Aggregated OCO   *
# Source Modules for this Program)                                     *
#                                                                      *
# OCO Source Materials                                                 *
#                                                                      *
# 5765-303 (C) Copyright IBM Corporation 1990, 2009                    *
#***********************************************************************

#
# This init script is designed to start a single Tivoli Storage Manager
# server instance on a system where multiple instances might be running.
# It assumes that the name of the script is also the name of the instance
# to be started (or, if the script name starts with Snn or Knn, where 'n'
# is a digit, that the name of the instance is the script name with the
# three letter prefix removed).
#
# To use the script to start multiple instances, install multiple copies
# of the script in /etc/rc.d/init.d, naming each copy after the instance
# it will start.
#
# The script makes a number of simplifying assumptions about the way
# the instance is set up.
# - The Tivoli Storage Manager Server instance runs as a non-root user whose
#   name is the instance name
# - The server's instance directory (the directory in which it keeps all of
#   its important state information) is in a subdirectory of the home
#   directory called tsminst1.
# If any of these assumptions are not valid, then the script will require
# some modifications to work.  To start with, look at the
# instance, instance_user, and instance_dir variables set below...

# First of all, check for syntax
if [[ $# != 1 ]]
then
  echo $"Usage: $0 {start|stop|status|restart}"
  exit 1
fi

prog="dsmserv"
instance=tsm0
serverBinDir="/opt/tivoli/tsm/server/bin"

if [[ ! -e $serverBinDir/$prog ]]
then
   echo "IBM Tivoli Storage Manager Server not found on this system ($serverBinDir/$prog)"
   exit -1
fi

# see if $0 starts with Snn or Knn, where 'n' is a digit.  If it does, then
# strip off the prefix and use the remainder as the instance name.
if [[ ${instance:0:1} == S ]]
then
  instance=${instance#S[0123456789][0123456789]}
elif [[ ${instance:0:1} == K ]]
then
  instance=${instance#K[0123456789][0123456789]}
fi

instance_home=`${serverBinDir}/dsmfngr $instance 2>/dev/null`
if [[ -z "$instance_home" ]]
then
  instance_home="/home/${instance}"
fi
instance_user=tsm0
instance_dir=/tsm0
pidfile="/var/run/${prog}_${instance}.pid"

PATH=/sbin:/bin:/usr/bin:/usr/sbin:$serverBinDir

#
# Do some basic error checking before starting the server
#
# Is the server installed?
if [[ ! -e $serverBinDir/$prog ]]
then
   echo "IBM Tivoli Storage Manager Server not found on this system"
   exit 0
fi

# Does the instance directory exist?
if [[ ! -d $instance_dir ]]
then
 echo "Instance directory ${instance_dir} does not exist"
 exit -1
fi
rc=0

SLEEP_INTERVAL=5
MAX_SLEEP_TIME=10

function check_pid_file()
{
    test -f $pidfile
}

function check_process()
{
    ps -p `cat $pidfile` > /dev/null
}

function check_running()
{
    check_pid_file && check_process
}

start() {
        # set the standard value for the user limits
        ulimit -c unlimited
        ulimit -d unlimited
        ulimit -f unlimited
        ulimit -n 65536
        ulimit -t unlimited
        ulimit -u 16384

        echo -n "Starting $prog instance $instance ... "
        #if we're already running, say so
        status 0
        if [[ $g_status == "running" ]]
        then
           echo "$prog instance $instance already running..."
           exit 0
        else
           $serverBinDir/rc.dsmserv -u $instance_user -i $instance_dir -q >/dev/null 2>&1 &
           # give enough time to server to start
           sleep 5
           # if the lock file got created, we did ok
           if [[ -f $instance_dir/dsmserv.v6lock ]]
           then
              gawk --source '{print $4}' $instance_dir/dsmserv.v6lock>$pidfile
              [ $? = 0 ] && echo "Succeeded" || echo "Failed"
              rc=$?
              echo
              [ $rc -eq 0 ] && touch /var/lock/subsys/${instance}
              return $rc
           else
              echo "Failed"
              return 1
           fi
       fi
}

stop() {
        echo  "Stopping $prog instance $instance ..."
        if [[ -e $pidfile ]]
        then
           # make sure someone else didn't kill us already
           progpid=`cat $pidfile`
           running=`ps -ef | grep $prog | grep -w $progpid | grep -v grep`
           if [[ -n $running ]]
           then
              #echo "executing cmd kill `cat $pidfile`"
              kill `cat $pidfile`

              total_slept=0
              while check_running; do \
                  echo  "$prog instance $instance still running, will check after $SLEEP_INTERVAL seconds"
                  sleep $SLEEP_INTERVAL
                  total_slept=`expr $total_slept + 1`

                  if [ "$total_slept" -gt "$MAX_SLEEP_TIME" ]; then \
                      break
                  fi
              done

              if  check_running
              then
                echo "Unable to stop $prog instance $instance"
                exit 1
              else
                echo "$prog instance $instance stopped Successfully"
              fi
           fi
           # remove the pid file so that we don't try to kill same pid again
           rm $pidfile
           if [[ $? != 0 ]]
           then
              echo "Process $prog instance $instance stopped, but unable to remove $pidfile"
              echo "Be sure to remove $pidfile."
              exit 1
           fi
        else
           echo "$prog instance $instance is not running."
        fi
        rc=$?
        echo
        [ $rc -eq 0 ] && rm -f /var/lock/subsys/${instance}
        return $rc
}

status() {
      # check usage
      if [[ $# != 1 ]]
      then
         echo "$0: Invalid call to status routine. Expected argument: "
         echo "where display_to_screen is 0 or 1 and indicates whether output will be sent to screen."
         exit 100
         # exit 1
      fi
      #see if file $pidfile exists
      # if it does, see if process is running
      # if it doesn't, it's not running - or at least was not started by dsmserv.rc
      if [[ -e $pidfile ]]
      then
         progpid=`cat $pidfile`
         running=`ps -ef | grep $prog | grep -w $progpid | grep -v grep`
         if [[ -n $running ]]
         then
            g_status="running"
         else
            g_status="stopped"
            # remove the pidfile if stopped.
            if [[ -e $pidfile ]]
            then
                rm $pidfile
                if [[ $? != 0 ]]
                then
                    echo "$prog instance $instance stopped, but unable to remove $pidfile"
                    echo "Be sure to remove $pidfile."
                fi
            fi
         fi
      else
        g_status="stopped"
      fi
      if [[ $1 == 1 ]]
      then
            echo "Status of $prog instance $instance: $g_status"
      fi

      if [ "${1}" = "1" ]
      then
        case ${g_status} in
          (stopped) EXIT=100 ;;
          (running) EXIT=110 ;;
        esac
        exit ${EXIT}
      fi
}

restart() {
        stop
        start
}

case "$1" in
  start)
        start
        ;;
  stop)
        stop
        ;;
  status)
        status 1
        ;;
  restart|reload)
        restart
        ;;
  *)
        echo $"Usage: $0 {start|stop|status|restart}"
        exit 1
esac

exit $?

… and the diff(1) between the original and the modified script.

[root@300 ~]# diff -u /etc/init.d/tsm0 /root/tsm0
--- /etc/init.d/tsm0    2016-07-13 13:20:43.000000000 +0200
+++ /root/tsm0          2016-07-13 13:27:41.000000000 +0200
@@ -207,7 +207,8 @@
       then
          echo "$0: Invalid call to status routine. Expected argument: "
          echo "where display_to_screen is 0 or 1 and indicates whether output will be sent to screen."
-         exit 1
+         exit 100
+         # exit 1
       fi
       #see if file $pidfile exists
       # if it does, see if process is running
@@ -239,6 +240,15 @@
       then
             echo "Status of $prog instance $instance: $g_status"
       fi
+
+      if [ "${1}" = "1" ]
+      then
+        case ${g_status} in
+          (stopped) EXIT=100 ;;
+          (running) EXIT=110 ;;
+        esac
+        exit ${EXIT}
+      fi
 }

 restart() {

Copy tsm0 Profile to the Other Node

[root@300 ~]# pwd
/home
[root@300 /home]# tar -czf - tsm0 | ssh 301 'tar -C /home -xzf -'
[root@300 ~]# cat /home/tsm0/sqllib/db2nodes.cfg
0 TSM0.domain.com 0
[root@301 ~]# cat /home/tsm0/sqllib/db2nodes.cfg
0 TSM0.domain.com 0

IBM TSM Server Start

[root@300 ~]# hares -online TSM0_ip_bond0         -sys 300
[root@300 ~]# hares -online TSM0_mnt_active_log   -sys 300
[root@300 ~]# hares -online TSM0_mnt_archive_log  -sys 300
[root@300 ~]# hares -online TSM0_mnt_db_01        -sys 300
[root@300 ~]# hares -online TSM0_mnt_db_02        -sys 300
[root@300 ~]# hares -online TSM0_mnt_db_03        -sys 300
[root@300 ~]# hares -online TSM0_mnt_db_backup_01 -sys 300
[root@300 ~]# hares -online TSM0_mnt_db_backup_02 -sys 300
[root@300 ~]# hares -online TSM0_mnt_db_backup_03 -sys 300
[root@300 ~]# hares -online TSM0_mnt_instance     -sys 300
[root@300 ~]# hares -online TSM0_mnt_pool0_01     -sys 300
[root@300 ~]# hares -online TSM0_mnt_pool0_02     -sys 300
[root@300 ~]# hares -online TSM0_mnt_pool0_03     -sys 300
[root@300 ~]# hares -online TSM0_mnt_pool0_04     -sys 300
[root@300 ~]# hares -online TSM0_mnt_pool0_05     -sys 300
[root@300 ~]# hares -online TSM0_mnt_pool0_06     -sys 300
[root@300 ~]# hares -state | grep TSM0 | grep 300
TSM0_dg               State                 300  ONLINE
TSM0_ip_bond0         State                 300  ONLINE
TSM0_mnt_active_log   State                 300  ONLINE
TSM0_mnt_archive_log  State                 300  ONLINE
TSM0_mnt_db_01        State                 300  ONLINE
TSM0_mnt_db_02        State                 300  ONLINE
TSM0_mnt_db_03        State                 300  ONLINE
TSM0_mnt_db_backup_01 State                 300  ONLINE
TSM0_mnt_db_backup_02 State                 300  ONLINE
TSM0_mnt_db_backup_03 State                 300  ONLINE
TSM0_mnt_instance     State                 300  ONLINE
TSM0_mnt_pool0_01     State                 300  ONLINE
TSM0_mnt_pool0_02     State                 300  ONLINE
TSM0_mnt_pool0_03     State                 300  ONLINE
TSM0_mnt_pool0_04     State                 300  ONLINE
TSM0_mnt_pool0_05     State                 300  ONLINE
TSM0_mnt_pool0_06     State                 300  ONLINE
TSM0_nic_bond0        State                 300  ONLINE
TSM0_server           State                 300  OFFLINE

[root@300 ~]# cat >> /etc/services << __EOF
DB2_tsm0        60000/tcp
DB2_tsm0_1      60001/tcp
DB2_tsm0_2      60002/tcp
DB2_tsm0_3      60003/tcp
DB2_tsm0_4      60004/tcp
DB2_tsm0_END    60005/tcp
__EOF
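
The DB2 instance can fail over to the other node, so the same entries are needed there as well – a sketch, assuming root SSH access between the nodes:

[root@300 ~]# grep DB2_tsm0 /etc/services | ssh 301 'cat >> /etc/services'
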
[root@300 ~]# hagrp -freeze TSM0_site
[root@300 ~]# hastatus -sum

-- SYSTEM STATE
-- System               State                Frozen

A  300            RUNNING              0
A  301            RUNNING              0

-- GROUP STATE
-- Group           System               Probed     AutoDisabled    State

B  NSR_site        300            Y          N               OFFLINE
B  NSR_site        301            Y          N               ONLINE
B  RMAN_site       300            Y          N               OFFLINE
B  RMAN_site       301            Y          N               ONLINE
B  TSM0_site       300            Y          N               PARTIAL
B  TSM0_site       301            Y          N               OFFLINE
B  VCS_site        300            Y          N               OFFLINE
B  VCS_site        301            Y          N               ONLINE

-- GROUPS FROZEN
-- Group

C  TSM0_site

-- RESOURCES DISABLED
-- Group           Type            Resource

H  TSM0_site      Application     TSM0_server
H  TSM0_site      DiskGroup       TSM0_dg
H  TSM0_site      IP              TSM0_ip_bond0
H  TSM0_site      Mount           TSM0_mnt_active_log
H  TSM0_site      Mount           TSM0_mnt_archive_log
H  TSM0_site      Mount           TSM0_mnt_db_01
H  TSM0_site      Mount           TSM0_mnt_db_02
H  TSM0_site      Mount           TSM0_mnt_db_03
H  TSM0_site      Mount           TSM0_mnt_db_backup_01
H  TSM0_site      Mount           TSM0_mnt_db_backup_02
H  TSM0_site      Mount           TSM0_mnt_db_backup_03
H  TSM0_site      Mount           TSM0_mnt_instance
H  TSM0_site      Mount           TSM0_mnt_pool0_01
H  TSM0_site      Mount           TSM0_mnt_pool0_02
H  TSM0_site      Mount           TSM0_mnt_pool0_03
H  TSM0_site      Mount           TSM0_mnt_pool0_04
H  TSM0_site      Mount           TSM0_mnt_pool0_05
H  TSM0_site      Mount           TSM0_mnt_pool0_06
H  TSM0_site      NIC             TSM0_nic_bond0

[root@300 ~]# su - tsm0 -c '/opt/tivoli/tsm/server/bin/dsmserv -i /tsm0'
ANR7800I DSMSERV generated at 16:39:04 on Jun  8 2016.

IBM Tivoli Storage Manager for Linux/x86_64
Version 7, Release 1, Level 6.000

Licensed Materials - Property of IBM

(C) Copyright IBM Corporation 1990, 2016.
All rights reserved.
U.S. Government Users Restricted Rights - Use, duplication or disclosure
restricted by GSA ADP Schedule Contract with IBM Corporation.

ANR7801I Subsystem process ID is 9834.
ANR0900I Processing options file /tsm0/dsmserv.opt.
ANR0010W Unable to open message catalog for language en_US.UTF-8. The default language message
catalog will be used.
ANR7814I Using instance directory /tsm0.
ANR4726I The ICC support module has been loaded.
ANR0990I Server restart-recovery in progress.
ANR0152I Database manager successfully started.
ANR1628I The database manager is using port 51500 for server connections.
ANR1635I The server machine GUID, 54.80.e8.50.e4.48.e6.11.8e.6d.00.0a.f7.49.2b.08, has
initialized.
ANR2100I Activity log process has started.
ANR3733W The master encryption key cannot be generated because the server password is not set.
ANR3339I Default Label in key data base is TSM Server SelfSigned Key.
ANR4726I The NAS-NDMP support module has been loaded.
ANR1794W TSM SAN discovery is disabled by options.
ANR2803I License manager started.
ANR8200I TCP/IP Version 4 driver ready for connection with clients on port 1500.
ANR9639W Unable to load Shared License File dsmreg.sl.
ANR9652I An EVALUATION LICENSE for IBM System Storage Archive Manager will expire on
08/13/2016.
ANR9652I An EVALUATION LICENSE for Tivoli Storage Manager Basic Edition will expire on
08/13/2016.
ANR9652I An EVALUATION LICENSE for Tivoli Storage Manager Extended Edition will expire on
08/13/2016.
ANR2828I Server is licensed to support IBM System Storage Archive Manager.
ANR2828I Server is licensed to support Tivoli Storage Manager Basic Edition.
ANR2828I Server is licensed to support Tivoli Storage Manager Extended Edition.
ANR2560I Schedule manager started.
ANR0984I Process 1 for EXPIRE INVENTORY (Automatic) started in the BACKGROUND at 01:58:03 PM.
ANR0811I Inventory client file expiration started as process 1.
ANR0167I Inventory file expiration process 1 processed for 0 minutes.
ANR0812I Inventory file expiration process 1 completed: processed 0 nodes, examined 0 objects,
deleting 0 backup objects, 0 archive objects, 0 DB backup volumes, and 0 recovery plan files. 0
objects were retried and 0 errors were encountered.
ANR0985I Process 1 for EXPIRE INVENTORY (Automatic) running in the BACKGROUND completed with
completion state SUCCESS at 01:58:03 PM.
ANR0993I Server initialization complete.
ANR0916I TIVOLI STORAGE MANAGER distributed by Tivoli is now ready for use.
TSM:TSM0>q admin
ANR2017I Administrator SERVER_CONSOLE issued command: QUERY ADMIN

Administrator        Days Since       Days Since      Locked?       Privilege Classes
Name                Last Access     Password Set
--------------     ------------     ------------     ----------     -----------------------
ADMIN                        <1               <1         No         System
ADMIN_CENTER

TSM:TSM0>halt
ANR2017I Administrator SERVER_CONSOLE issued command: HALT
ANR1912I Stopping the activity log because of a server shutdown.
ANR0369I Stopping the database manager because of a server shutdown.
ANR0991I Server shutdown complete.


[root@300 ~]# hagrp -unfreeze TSM0_site

[root@300 ~]# hares -state | grep TSM0 | grep 300
TSM0_dg               State                 300  ONLINE
TSM0_ip_bond0         State                 300  ONLINE
TSM0_mnt_active_log   State                 300  ONLINE
TSM0_mnt_archive_log  State                 300  ONLINE
TSM0_mnt_db_01        State                 300  ONLINE
TSM0_mnt_db_02        State                 300  ONLINE
TSM0_mnt_db_03        State                 300  ONLINE
TSM0_mnt_db_backup_01 State                 300  ONLINE
TSM0_mnt_db_backup_02 State                 300  ONLINE
TSM0_mnt_db_backup_03 State                 300  ONLINE
TSM0_mnt_instance     State                 300  ONLINE
TSM0_mnt_pool0_01     State                 300  ONLINE
TSM0_mnt_pool0_02     State                 300  ONLINE
TSM0_mnt_pool0_03     State                 300  ONLINE
TSM0_mnt_pool0_04     State                 300  ONLINE
TSM0_mnt_pool0_05     State                 300  ONLINE
TSM0_mnt_pool0_06     State                 300  ONLINE
TSM0_nic_bond0        State                 300  ONLINE
TSM0_server           State                 300  OFFLINE

[root@301 ~]# hares -online TSM0_server -sys 300

The errors below can safely be ignored during the first IBM TSM server startup.

IGNORE | ERRORS TO IGNORE DURING FIRST IBM TSM SERVER START
IGNORE | 
IGNORE | DBI1306N  The instance profile is not defined.
IGNORE |
IGNORE | Explanation:
IGNORE |
IGNORE | The instance is not defined in the target machine registry.
IGNORE |
IGNORE | User response:
IGNORE |
IGNORE | Specify an existing instance name or create the required instance.

Install IBM TSM Server Licenses

Screenshots from that process below.

ibm-tsm-install-license-01

ibm-tsm-install-license-02

ibm-tsm-install-license-03

ibm-tsm-install-license-04

Let's now register the licenses for the IBM TSM.

tsm: TSM0_SITE>register license file=/opt/tivoli/tsm/server/bin/tsmee.lic
ANR2852I Current license information:
ANR2853I New license information:
ANR2828I Server is licensed to support Tivoli Storage Manager Basic Edition.
ANR2828I Server is licensed to support Tivoli Storage Manager Extended Edition.

IBM TSM Client Configuration on the IBM TSM Server Nodes

[root@300 ~]# cat > /opt/tivoli/tsm/client/ba/bin/dsm.opt << __EOF
SERVERNAME TSM0
__EOF

[root@301 ~]# cat > /opt/tivoli/tsm/client/ba/bin/dsm.opt << __EOF
SERVERNAME TSM0
__EOF

[root@300 ~]# cat > /opt/tivoli/tsm/client/ba/bin/dsm.sys << __EOF
SERVERNAME TSM0
COMMMethod TCPip
TCPPort 1500
TCPSERVERADDRESS localhost
SCHEDLOGNAME /opt/tivoli/tsm/client/ba/bin/dsmsched.log
ERRORLOGNAME /opt/tivoli/tsm/client/ba/bin/dsmerror.log
SCHEDLOGRETENTION 7 D
ERRORLOGRETENTION 7 D
__EOF

[root@301 ~]# cat > /opt/tivoli/tsm/client/ba/bin/dsm.sys << __EOF
SERVERNAME TSM0
COMMMethod TCPip
TCPPort 1500
TCPSERVERADDRESS localhost
SCHEDLOGNAME /opt/tivoli/tsm/client/ba/bin/dsmsched.log
ERRORLOGNAME /opt/tivoli/tsm/client/ba/bin/dsmerror.log
SCHEDLOGRETENTION 7 D
ERRORLOGRETENTION 7 D
__EOF

Install lin_tape on IBM TSM Server

[root@ALL]# uname -r
2.6.32-504.el6.x86_64

[root@ALL]# uname -r | sed 's|.x86_64||g'
2.6.32-504.el6

[root@ALL]# yum --showduplicates list kernel-devel | grep 2.6.32-504.el6
kernel-devel.x86_64            2.6.32-504.el6                 rhel-6-server-rpms

[root@ALL]# yum install rpm-build kernel-devel-2.6.32-504.el6
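
The binary lin_tape RPM used below was built from the source RPM shipped by IBM – a sketch, assuming the lin_tape SRPM from IBM Fix Central is named accordingly and sits in the current directory:

[root@ALL]# rpmbuild --rebuild lin_tape-3.0.10-1.src.rpm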

[root@ALL]# rpm -Uvh /root/rpmbuild/RPMS/x86_64/lin_tape-3.0.10-1.x86_64.rpm
Preparing...                ########################################### [100%]
   1:lin_tape               ########################################### [100%]
Starting lin_tape...
lin_tape loaded

[root@ALL]# rpm -Uvh lin_taped-3.0.10-rhel6.x86_64.rpm
Preparing...                ########################################### [100%]
   1:lin_taped              ########################################### [100%]
Starting lin_tape...
lin_taped loaded

[root@ALL]# /etc/init.d/lin_tape start
Starting lin_tape... lin_taped already running. Abort!

[root@ALL]# /etc/init.d/lin_tape restart
Shutting down lin_tape... lin_taped unloaded
Starting lin_tape...

Library Configuration

This is quite an unusual configuration, as the IBM TS3310 library with 4 LTO4 drives is logically partitioned into two logical libraries – 2 drives dedicated to Dell/EMC NetWorker and 2 drives dedicated to the IBM TSM server. The library is shown below.

ibm-tsm-ts3310.jpg

The changers and tape drives for each backup system.

Networker | (L) 000001317577_LLA changer0
TSM       | (L) 000001317577_LLB changer1_persistent_TSM0
Networker | (1) 7310132058       tape0
Networker | (2) 7310295146       tape1
TSM       | (3) 7310214751       tape2_persistent_TSM0
TSM       | (4) 7310214904       tape3_persistent_TSM0
[root@300 ~]# find /dev/IBM*
/dev/IBMchanger0
/dev/IBMchanger1
/dev/IBMSpecial
/dev/IBMtape
/dev/IBMtape0
/dev/IBMtape0n
/dev/IBMtape1
/dev/IBMtape1n
/dev/IBMtape2
/dev/IBMtape2n
/dev/IBMtape3
/dev/IBMtape3n

We will use UDEV for persistent configuration.

[root@300 ~]# udevadm info -a -p $(udevadm info -q path -n /dev/IBMtape0)    | grep -i serial
    ATTR{serial_num}=="7310132058"
[root@300 ~]# udevadm info -a -p $(udevadm info -q path -n /dev/IBMtape1)    | grep -i serial
    ATTR{serial_num}=="7310295146"
[root@300 ~]# udevadm info -a -p $(udevadm info -q path -n /dev/IBMtape2)    | grep -i serial
    ATTR{serial_num}=="7310214751"
[root@300 ~]# udevadm info -a -p $(udevadm info -q path -n /dev/IBMtape3)    | grep -i serial
    ATTR{serial_num}=="7310214904"
[root@300 ~]# udevadm info -a -p $(udevadm info -q path -n /dev/IBMchanger0) | grep -i serial
    ATTR{serial_num}=="000001317577_LLA"
[root@300 ~]# udevadm info -a -p $(udevadm info -q path -n /dev/IBMchanger1) | grep -i serial
    ATTR{serial_num}=="000001317577_LLB"
[root@300 ~]# cat /proc/scsi/IBM*
lin_tape version: 3.0.10
lin_tape major number: 239
Attached Changer Devices:
Number  model       SN                HBA             SCSI            FO Path
0       3576-MTL    000001317577_LLA  qla2xxx         2:0:1:1         NA
1       3576-MTL    000001317577_LLB  qla2xxx         4:0:1:1         NA
lin_tape version: 3.0.10
lin_tape major number: 239
Attached Tape Devices:
Number  model       SN                HBA             SCSI            FO Path
0       ULT3580-TD4 7310132058        qla2xxx         2:0:0:0         NA
1       ULT3580-TD4 7310295146        qla2xxx         2:0:1:0         NA
2       ULT3580-TD4 7310214751        qla2xxx         4:0:0:0         NA
3       ULT3580-TD4 7310214904        qla2xxx         4:0:1:0         NA

[root@300 ~]# cat /etc/udev/rules.d/98-lin_tape.rules
KERNEL=="IBMtape*", SYSFS{serial_num}=="7310132058", MODE="0660", SYMLINK="IBMtape0"
KERNEL=="IBMtape*", SYSFS{serial_num}=="7310295146", MODE="0660", SYMLINK="IBMtape1"
KERNEL=="IBMtape*", SYSFS{serial_num}=="7310214751", MODE="0660", SYMLINK="IBMtape2_persistent_TSM0"
KERNEL=="IBMtape*", SYSFS{serial_num}=="7310214904", MODE="0660", SYMLINK="IBMtape3_persistent_TSM0"
KERNEL=="IBMchanger*", ATTR{serial_num}=="000001317577_LLB", MODE="0660", SYMLINK="IBMchanger1_persistent_TSM0"

[root@301 ~]# /etc/init.d/lin_tape stop
Shutting down lin_tape... lin_taped unloaded

[root@301 ~]# rmmod lin_tape

[root@301 ~]# /etc/init.d/lin_tape start
Starting lin_tape...

New persistent devices.

[root@301 ~]# find /dev/IBM*
/dev/IBMchanger0
/dev/IBMchanger1
/dev/IBMchanger1_persistent_TSM0
/dev/IBMSpecial
/dev/IBMtape
/dev/IBMtape0
/dev/IBMtape0n
/dev/IBMtape1
/dev/IBMtape1n
/dev/IBMtape2
/dev/IBMtape2n
/dev/IBMtape2_persistent_TSM0
/dev/IBMtape3
/dev/IBMtape3n
/dev/IBMtape3_persistent_TSM0

Let's update the paths to the tape drives now.

tsm: TSM0_SITE>query path f=d

                   Source Name: TSM0_SITE
                   Source Type: SERVER
              Destination Name: TS3310
              Destination Type: LIBRARY
                       Library:
                     Node Name:
                        Device: /dev/IBMchanger0
              External Manager:
              ZOS Media Server:
                  Comm. Method:
                           LUN:
                     Initiator: 0
                     Directory:
                       On-Line: Yes
Last Update by (administrator): ADMIN
         Last Update Date/Time: 09/16/2014 13:36:14

                   Source Name: TSM0_SITE
                   Source Type: SERVER
              Destination Name: DRIVE0
              Destination Type: DRIVE
                       Library: TS3310
                     Node Name:
                        Device: /dev/IBMtape0
              External Manager:
              ZOS Media Server:
                  Comm. Method:
                           LUN:
                     Initiator: 0
                     Directory:
                       On-Line: Yes
Last Update by (administrator): SERVER_CONSOLE
         Last Update Date/Time: 07/14/2016 14:02:02

                   Source Name: TSM0_SITE
                   Source Type: SERVER
              Destination Name: DRIVE1
              Destination Type: DRIVE
                       Library: TS3310
                     Node Name:
                        Device: /dev/IBMtape1
              External Manager:
              ZOS Media Server:
                  Comm. Method:
                           LUN:
                     Initiator: 0
                     Directory:
                       On-Line: Yes
Last Update by (administrator): SERVER_CONSOLE
         Last Update Date/Time: 07/14/2016 13:59:48

tsm: TSM0_SITE>update path TSM0_SITE TS3310 SRCType=SERVER DESTType=LIBRary online=no
ANR1722I A path from TSM0_SITE to TS3310 has been updated.

tsm: TSM0_SITE>update path TSM0_SITE TS3310 SRCType=SERVER DESTType=LIBRary device=/dev/IBMchanger1_persistent_TSM0
ANR1722I A path from TSM0_SITE to TS3310 has been updated.

tsm: TSM0_SITE>update path TSM0_SITE TS3310 SRCType=SERVER DESTType=LIBRary online=yes
ANR1722I A path from TSM0_SITE to TS3310 has been updated.

tsm: TSM0_SITE>update drive TS3310           DRIVE1           SERial=AUTODetect element=AUTODetect
ANR8467I Drive DRIVE1 in library TS3310 updated.

tsm: TSM0_SITE>update drive TS3310           DRIVE1         online=no
ANR8467I Drive DRIVE1 in library TS3310 updated.

tsm: TSM0_SITE>update drive TS3310           DRIVE1           SERial=AUTODetect element=AUTODetect
ANR8467I Drive DRIVE1 in library TS3310 updated.

tsm: TSM0_SITE>update drive TS3310           DRIVE1         online=yes
ANR8467I Drive DRIVE1 in library TS3310 updated.

tsm: TSM0_SITE>update drive TS3310           DRIVE1           SERial=AUTODetect element=AUTODetect
ANR8467I Drive DRIVE1 in library TS3310 updated.

tsm: TSM0_SITE>update drive TS3310           DRIVE1         online=yes
ANR8467I Drive DRIVE1 in library TS3310 updated.

tsm: TSM0_SITE>update path TSM0_SITE DRIVE0 SRCType=SERVER autodetect=yes DESTType=DRIVE library=ts3310 device=/dev/IBMtape2_persistent_TSM0
ANR1722I A path from TSM0_SITE to TS3310 DRIVE0 has been updated.

tsm: TSM0_SITE>update drive TS3310           DRIVE0           SERial=AUTODetect element=AUTODetect
ANR8467I Drive DRIVE0 in library TS3310 updated.

tsm: TSM0_SITE>update path TSM0_SITE DRIVE1 SRCType=SERVER autodetect=yes DESTType=DRIVE library=ts3310 device=/dev/IBMtape3_persistent_TSM0
ANR1722I A path from TSM0_SITE to TS3310 DRIVE1 has been updated.

tsm: TSM0_SITE>update path TSM0_SITE DRIVE1 SRCType=SERVER DESTType=DRIVE library=ts3310 online=yes
ANR1722I A path from TSM0_SITE to TS3310 DRIVE1 has been updated.

tsm: TSM0_SITE>update path TSM0_SITE DRIVE0 SRCType=SERVER DESTType=DRIVE library=ts3310 online=yes
ANR1722I A path from TSM0_SITE to TS3310 DRIVE0 has been updated.


Let's verify that our library works properly.

tsm: TSM0_SITE>audit library TS3310 checklabel=barcode
ANS8003I Process number 2 started.

tsm: TSM0_SITE>query proc

Process      Process Description      Process Status
  Number
--------     --------------------     -------------------------------------------------
       2     AUDIT LIBRARY            ANR8459I Auditing volume inventory for library
                                       TS3310.


tsm: TSM0_SITE>query act
(...)

08/04/2016 14:30:41      ANR2017I Administrator ADMIN issued command: AUDIT
                          LIBRARY TS3310 checklabel=barcode  (SESSION: 8)
08/04/2016 14:30:41      ANR0984I Process 2 for AUDIT LIBRARY started in the
                          BACKGROUND at 02:30:41 PM. (SESSION: 8, PROCESS: 2)
08/04/2016 14:30:41      ANR8457I AUDIT LIBRARY: Operation for library TS3310
                          started as process 2. (SESSION: 8, PROCESS: 2)
08/04/2016 14:30:46      ANR8358E Audit operation is required for library TS3310.
                          (SESSION: 8, PROCESS: 2)
08/04/2016 14:30:51      ANR8439I SCSI library TS3310 is ready for operations.
                          (SESSION: 8, PROCESS: 2)

(...)

08/04/2016 14:31:26      ANR0985I Process 2 for AUDIT LIBRARY running in the
                          BACKGROUND completed with completion state SUCCESS at
                          02:31:26 PM. (SESSION: 8, PROCESS: 2)

(...)

IBM TSM Storage Pool Configuration

IBM TSM container storage pool creation.

tsm: TSM0_SITE>define stgpool POOL0_stgFC stgtype=directory
ANR2249I Storage pool POOL0_stgFC is defined.

tsm: TSM0_SITE>define stgpooldirectory POOL0_stgFC /tsm0/pool0/pool0_01,/tsm0/pool0/pool0_02,/tsm0/pool0/pool0_03,/tsm0/pool0/pool0_04,/tsm0/pool0/pool0_05,/tsm0/pool0/pool0_06
ANR3254I Storage pool directory /tsm0/pool0/pool0_01 was defined in storage pool POOL0_stgFC.
ANR3254I Storage pool directory /tsm0/pool0/pool0_02 was defined in storage pool POOL0_stgFC.
ANR3254I Storage pool directory /tsm0/pool0/pool0_03 was defined in storage pool POOL0_stgFC.
ANR3254I Storage pool directory /tsm0/pool0/pool0_04 was defined in storage pool POOL0_stgFC.
ANR3254I Storage pool directory /tsm0/pool0/pool0_05 was defined in storage pool POOL0_stgFC.
ANR3254I Storage pool directory /tsm0/pool0/pool0_06 was defined in storage pool POOL0_stgFC.

tsm: TSM0_SITE>q stgpooldirectory

Storage Pool Name     Directory                                         Access
-----------------     ---------------------------------------------     ------------
POOL0_stgFC           /tsm0/pool0/pool0_01                              Read/Write
POOL0_stgFC           /tsm0/pool0/pool0_02                              Read/Write
POOL0_stgFC           /tsm0/pool0/pool0_03                              Read/Write
POOL0_stgFC           /tsm0/pool0/pool0_04                              Read/Write
POOL0_stgFC           /tsm0/pool0/pool0_05                              Read/Write
POOL0_stgFC           /tsm0/pool0/pool0_06                              Read/Write


IBM TSM Backup Policies Configuration

Below is an example policy.

tsm: TSM0_SITE>def dom  FS backret=30 archret=30
ANR1500I Policy domain FS defined.

tsm: TSM0_SITE>def pol  FS FS
ANR1510I Policy set FS defined in policy domain FS.

tsm: TSM0_SITE>def mg   FS FS FS_1DAY
ANR1520I Management class FS_1DAY defined in policy domain FS, set FS.

tsm: TSM0_SITE>def co   FS FS FS_1DAY   STANDARD type=backup destination=POOL0_STGFC verexists=32 verdeleted=1 retextra=31 retonly=14
ANR1530I Backup copy group STANDARD defined in policy domain FS, set FS, management class FS_1DAY.

tsm: TSM0_SITE>def mg   FS FS FS_1MONTH
ANR1520I Management class FS_1MONTH defined in policy domain FS, set FS.

tsm: TSM0_SITE>def co   FS FS FS_1MONTH STANDARD type=backup destination=POOL0_STGFC  verexists=4 verdeleted=1 retextra=91 retonly=14
ANR1530I Backup copy group STANDARD defined in policy domain FS, set FS, management class FS_1MONTH.

tsm: TSM0_SITE>as defmg FS FS FS_1DAY
ANR1538I Default management class set to FS_1DAY for policy domain FS, set FS.

tsm: TSM0_SITE>act pol  FS FS
ANR1554W DEFAULT Management class FS_1DAY in policy set FS FS does not have an ARCHIVE copygroup:  files will not be archived by default if this set is activated.

Do you wish to proceed? (Yes (Y)/No (N)) y
ANR1554W DEFAULT Management class FS_1DAY in policy set FS FS does not have an ARCHIVE copygroup:  files will not be archived by default if this set is activated.
ANR1514I Policy set FS activated in policy domain FS.
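
With the policy set active, client nodes can be registered into this domain – a sketch with a hypothetical node name and a placeholder password:

tsm: TSM0_SITE>register node NODE1 PASSWORD domain=FS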



I hope that the number of instructions did not discourage you from one of the best enterprise backup systems – the IBM TSM (now IBM Spectrum Protect) – and one of the best high availability clusters – the Veritas Cluster Server 🙂

EOF

Syncthing on FreeBSD

This article will show you how to setup Syncthing on FreeBSD system.

syncthing-logo.png

One warning at the beginning – all > and < characters in the Syncthing configuration file were changed to } and { respectively. This is because of a WordPress limitation. Remember that the Syncthing config is an XML file.

For most of my personal backup needs I always use rsync(1), but on limited devices such as phones or tablets it is a real PITA. Thus for the automated import of photos and other files from such devices I prefer to use the Syncthing tool.

If you haven’t heard about it yet, I will cite the Syncthing site – https://syncthing.net/. “Syncthing replaces proprietary sync and cloud services with something open, trustworthy and decentralized. Your data is your data alone and you deserve to choose where it is stored, if it is shared with some third party and how it’s transmitted over the Internet.” … and Wikipedia: “Syncthing is a free, open-source peer-to-peer file synchronization application available for Windows, Mac, Linux, Android, Solaris, Darwin, and BSD. It can sync files between devices on a local network, or between remote devices over the Internet. Data security and data safety are built into the design of the software.”

One may ask how it is different from Nextcloud, for example. Well, with Nextcloud you have almost an ‘entire’ cloud stack with custom applications at your disposal. With Syncthing you have a synchronization tool between devices and nothing more.

Initially I wanted – similarly to the Nextcloud on FreeBSD setup – to set up everything in a FreeBSD Jail. The problem is that Syncthing does not work under FreeBSD Jails virtualization, as I figured out after several hours of trying to find out what was wrong. The management interface of Syncthing worked as expected and was accessible, but the Syncthing on the Android mobile phone was not able to connect/sync with the Syncthing instance in the FreeBSD Jail. Sure, I could connect to the Syncthing management interface from the phone, but I still could not do any backup using the Syncthing protocol. Knowing this limitation you have 3 options to choose from:

  • Setup Syncthing on FreeBSD host like any other service.
  • Use FreeBSD Bhyve virtualization for Syncthing instance.
  • Use VirtualBox package/port for Syncthing instance.

I have chosen the first option. It is actually the same for Bhyve and VirtualBox, but additional work is needed on the virtualization layer. I will use an Android based mobile phone as an example of a Syncthing client, but you can sync data between computers as well.

One more thing – there is no such thing as a Syncthing server and a Syncthing client. All Syncthing instances/installations are the same; you can just add/remove devices and directories to synchronize between those devices. I used the term ‘client’ above to show that I will be automating the copying of files from the phone to the FreeBSD server with a Syncthing instance, nothing more.

Host

Here are some basic steps that I have done on the FreeBSD host – things like the aliases database, timezone, DNS and basic FreeBSD settings in its core /etc/rc.conf file.

# newaliases -v
/etc/mail/aliases: 29 aliases, longest 10 bytes, 297 bytes total

# ln -s /usr/share/zoneinfo/Europe/Warsaw /etc/localtime

# date
Fri Aug 17 22:05:18 CEST 2018

# echo nameserver 1.1.1.1 > /etc/resolv.conf

# ping -c 3 freebsd.org
PING freebsd.org (96.47.72.84): 56 data bytes
64 bytes from 96.47.72.84: icmp_seq=0 ttl=51 time=117.918 ms
64 bytes from 96.47.72.84: icmp_seq=1 ttl=51 time=115.169 ms
64 bytes from 96.47.72.84: icmp_seq=2 ttl=51 time=115.392 ms

--- freebsd.org ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 115.169/116.160/117.918/1.247 ms

… and the main FreeBSD configuration file.

# cat /etc/rc.conf
# NETWORK
  hostname=blackbox.local
  ifconfig_re0="inet 10.0.0.100/24 up"
  defaultrouter="10.0.0.1"

# DAEMONS | YES
  zfs_enable=YES
  sshd_enable=YES
  ntpd_enable=YES
  syncthing_enable=YES
  syslogd_flags="-s -s"

# DAEMONS | no
  sendmail_enable=NONE
  sendmail_submit_enable=NO
  sendmail_outbound_enable=NO
  sendmail_msp_queue_enable=NO

# OTHER
  dumpdev=NO
  update_motd=NO
  virecover_enable=NO
  clear_tmp_enable=YES

Install

First we will switch from the quarterly to the latest pkg(8) branch to get the most up-to-date packages.

# grep url: /etc/pkg/FreeBSD.conf
  url: "pkg+http://pkg.FreeBSD.org/${ABI}/quarterly",

# sed -i '' s/quarterly/latest/g /etc/pkg/FreeBSD.conf

# grep url: /etc/pkg/FreeBSD.conf
  url: "pkg+http://pkg.FreeBSD.org/${ABI}/latest",

We will now bootstrap pkg(8) and then update its database to the latest available one.

# env ASSUME_ALWAYS_YES=yes pkg update -f
Bootstrapping pkg from pkg+http://pkg.FreeBSD.org/FreeBSD:11:amd64/latest, please wait...
Verifying signature with trusted certificate pkg.freebsd.org.2013102301... done
[syncthing.local] Installing pkg-1.10.5_1...
[syncthing.local] Extracting pkg-1.10.5_1: 100%
Updating FreeBSD repository catalogue...
pkg: Repository FreeBSD load error: access repo file(/var/db/pkg/repo-FreeBSD.sqlite) failed: No such file or directory
[syncthing.local] Fetching meta.txz: 100%    944 B   0.9kB/s    00:01    
[syncthing.local] Fetching packagesite.txz: 100%    6 MiB 352.7kB/s    00:19    
Processing entries: 100%
FreeBSD repository update completed. 32388 packages processed.
All repositories are up to date.

… and then install Syncthing from pkg(8) packages.

# pkg install -y syncthing 
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
The following 1 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
        syncthing: 0.14.48

Number of packages to be installed: 1

The process will require 88 MiB more space.
15 MiB to be downloaded.
[1/1] Fetching syncthing-0.14.48.txz: 100%   15 MiB 525.3kB/s    00:29    
Checking integrity... done (0 conflicting)
[1/1] Installing syncthing-0.14.48...
===> Creating groups.
Creating group 'syncthing' with gid '983'.
===> Creating users
Creating user 'syncthing' with uid '983'.
[1/1] Extracting syncthing-0.14.48: 100%
Message from syncthing-0.14.48:

WARNING: This version is not backwards compatible with 0.13.x, 0.12.x, 0.11.x
nor 0.10.x releases!

For more information, please read:

https://forum.syncthing.net/t/syncthing-v0-14-0/7806
https://github.com/syncthing/syncthing/releases/tag/v0.13.0
https://forum.syncthing.net/t/syncthing-v0-11-0-release-notes/2426
https://forum.syncthing.net/t/syncthing-syncthing-v0-12-0-beryllium-bedbug/6026

The Syncthing package created a syncthing user and group for us.

# id syncthing
uid=983(syncthing) gid=983(syncthing) groups=983(syncthing)

Look how small Syncthing is – these are all the files installed by the net/syncthing package.

# pkg info -l syncthing
syncthing-0.14.48:
        /usr/local/bin/stbench
        /usr/local/bin/stcli
        /usr/local/bin/stcompdirs
        /usr/local/bin/stdisco
        /usr/local/bin/stdiscosrv
        /usr/local/bin/stevents
        /usr/local/bin/stfileinfo
        /usr/local/bin/stfinddevice
        /usr/local/bin/stgenfiles
        /usr/local/bin/stindex
        /usr/local/bin/strelaypoolsrv
        /usr/local/bin/strelaysrv
        /usr/local/bin/stsigtool
        /usr/local/bin/sttestutil
        /usr/local/bin/stvanity
        /usr/local/bin/stwatchfile
        /usr/local/bin/syncthing
        /usr/local/etc/rc.d/syncthing
        /usr/local/etc/rc.d/syncthing-discosrv
        /usr/local/etc/rc.d/syncthing-relaypoolsrv
        /usr/local/etc/rc.d/syncthing-relaysrv
        /usr/local/share/doc/syncthing/AUTHORS
        /usr/local/share/doc/syncthing/LICENSE
        /usr/local/share/doc/syncthing/README.md

Configuration

As shown above, we already have syncthing_enable=YES added to the /etc/rc.conf file.

# /usr/local/etc/rc.d/syncthing rcvar
# syncthing
#
syncthing_enable="NO"
#   (default: "")

# grep syncthing_enable /etc/rc.conf
  syncthing_enable=YES

You may also check other startup options in the Syncthing rc(8) startup script.

# less -N /usr/local/etc/rc.d/syncthing
(...)
      9 # Add the following lines to /etc/rc.conf.local or /etc/rc.conf
     10 # to enable this service:
     11 #
     12 # syncthing_enable (bool):      Set to NO by default.
     13 #                               Set it to YES to enable syncthing.
     14 # syncthing_home (path):        Directory where syncthing configuration
     15 #                               data is stored.
     16 #                               Default: /usr/local/etc/syncthing
     17 # syncthing_log_file (path):    Syncthing log file
     18 #                               Default: /var/log/syncthing.log
     19 # syncthing_user (user):        Set user to run syncthing.
     20 #                               Default is "syncthing".
     21 # syncthing_group (group):      Set group to run syncthing.
     22 #                               Default is "syncthing".
(...)
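
Any of these options can be overridden from the /etc/rc.conf file – a sketch using sysrc(8), where the value shown is just the default:

# sysrc syncthing_home=/usr/local/etc/syncthing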

Syncthing needs the /var/log/syncthing.log log file. Let's create it then and set the proper owner and permissions for it.

# ls /var/log/syncthing.log
ls: /var/log/syncthing.log: No such file or directory

# :> /var/log/syncthing.log

# chown syncthing:syncthing /var/log/syncthing.log

# ls -l /var/log/syncthing.log
-rwxr-xr-x  1 syncthing  syncthing  0 2018.08.19 01:06 /var/log/syncthing.log

As we will be using this log file, we also need to take care of its rotation – we will use the built-in FreeBSD newsyslog(8) mechanism for that purpose.

# cat > /etc/newsyslog.conf.d/syncthing << __EOF
# logfilename              [owner:group]     mode  count  size  when  flags [/pid_file]
/var/log/syncthing.log  syncthing:syncthing  640   7      100   *     JC
__EOF

# cat /etc/newsyslog.conf.d/syncthing
# logfilename              [owner:group]     mode  count  size  when  flags [/pid_file]
/var/log/syncthing.log  syncthing:syncthing  640   7      100   *     JC

# newsyslog -v | grep syncthing
Processing /etc/newsyslog.conf.d/syncthing
/var/log/syncthing.log : size (Kb): 0 [100] --> skipping
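
To confirm that the entry is picked up you may also force an immediate rotation – assuming the configuration file created above:

# newsyslog -F -v /var/log/syncthing.log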

Let's try to start Syncthing for the first time.

# service syncthing start
Starting syncthing.
daemon: pidfile ``/var/run/syncthing.pid'': Permission denied
/usr/local/etc/rc.d/syncthing: WARNING: failed to start syncthing

It seems that the Syncthing rc(8) startup script does not create the PID file automatically, so let's create it.

# :> /var/run/syncthing.pid

# chown syncthing:syncthing /var/run/syncthing.pid

# ls -l /var/run/syncthing.pid
-rwxr-xr-x  1 syncthing  syncthing  0 2018.08.19 01:08 /var/run/syncthing.pid
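
Keep in mind that the FreeBSD cleanvar rc(8) script purges /var/run at boot, so this PID file will disappear after a reboot. One possible workaround sketch – recreate it from /etc/rc.local (mind the rc(8) startup ordering, this is not a polished solution):

# cat >> /etc/rc.local << __EOF
:> /var/run/syncthing.pid
chown syncthing:syncthing /var/run/syncthing.pid
__EOF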

Now let's try to start Syncthing again.

# service syncthing start
Starting syncthing.

Better. Let's see which ports it uses.

# sockstat -l -4 | grep syncthing
syncthing syncthing 27499 9  tcp46  *:22000               *:*
syncthing syncthing 27499 10 udp4   *:18876               *:*
syncthing syncthing 27499 13 udp4   *:21027               *:*
syncthing syncthing 27499 20 tcp4   127.0.0.1:8384        *:*

… and check its log file.

# cat /var/log/syncthing.log
[start] 01:08:40 INFO: Generating ECDSA key and certificate for syncthing...
[MPN4S] 01:08:40 INFO: syncthing v0.14.48 "Dysprosium Dragonfly" (go1.10.3 freebsd-amd64) root@111amd64-default-job-12 2018-08-08 09:19:19 UTC [noupgrade]
[MPN4S] 01:08:40 INFO: My ID: MPN4S65-UQWC5SP-3LR2XDB-T5JNYET-VQEQC3X-DSAUI27-BQQKZQE-BWQ3NAO
[MPN4S] 01:08:41 INFO: Single thread SHA256 performance is 131 MB/s using minio/sha256-simd (89 MB/s using crypto/sha256).
[MPN4S] 01:08:41 INFO: Default folder created and/or linked to new config
[MPN4S] 01:08:41 INFO: Default config saved. Edit /usr/local/etc/syncthing/config.xml to taste or use the GUI
[MPN4S] 01:08:42 INFO: Hashing performance is 112.85 MB/s
[MPN4S] 01:08:42 INFO: Updating database schema version from 0 to 2...
[MPN4S] 01:08:42 INFO: Updated symlink type for 0 index entries and added 0 invalid files to global list
[MPN4S] 01:08:42 INFO: Finished updating database schema version from 0 to 2
[MPN4S] 01:08:42 INFO: No stored folder metadata for "default": recalculating
[MPN4S] 01:08:42 WARNING: Creating directory for "Default Folder" (default): mkdir /Sync/: permission denied
[MPN4S] 01:08:42 WARNING: Creating folder marker: folder path missing
[MPN4S] 01:08:42 INFO: Ready to synchronize "Default Folder" (default) (readwrite)
[MPN4S] 01:08:42 INFO: Overall send rate is unlimited, receive rate is unlimited
[MPN4S] 01:08:42 INFO: Rate limits do not apply to LAN connections
[MPN4S] 01:08:42 INFO: Using discovery server https://discovery-v4.syncthing.net/v2/?nolookup&id=LYXKCHX-VI3NYZR-ALCJBHF-WMZYSPK-QG6QJA3-MPFYMSO-U56GTUK-NA2MIAW
[MPN4S] 01:08:42 INFO: Using discovery server https://discovery-v6.syncthing.net/v2/?nolookup&id=LYXKCHX-VI3NYZR-ALCJBHF-WMZYSPK-QG6QJA3-MPFYMSO-U56GTUK-NA2MIAW
[MPN4S] 01:08:42 INFO: Using discovery server https://discovery.syncthing.net/v2/?noannounce&id=LYXKCHX-VI3NYZR-ALCJBHF-WMZYSPK-QG6QJA3-MPFYMSO-U56GTUK-NA2MIAW
[MPN4S] 01:08:42 INFO: TCP listener ([::]:22000) starting
[MPN4S] 01:08:42 INFO: Relay listener (dynamic+https://relays.syncthing.net/endpoint) starting
[MPN4S] 01:08:42 WARNING: Error on folder "Default Folder" (default): folder path missing
[MPN4S] 01:08:42 INFO: Failed initial scan of readwrite folder "Default Folder" (default)
[MPN4S] 01:08:42 INFO: Device MPN4S65-UQWC5SP-3LR2XDB-T5JNYET-VQEQC3X-DSAUI27-BQQKZQE-BWQ3NAO is "blackbox.local" at [dynamic]
[MPN4S] 01:08:42 INFO: Loading HTTPS certificate: open /usr/local/etc/syncthing/https-cert.pem: no such file or directory
[MPN4S] 01:08:42 INFO: Creating new HTTPS certificate
[MPN4S] 01:08:42 INFO: GUI and API listening on 127.0.0.1:8384
[MPN4S] 01:08:42 INFO: Access the GUI via the following URL: http://127.0.0.1:8384/
[MPN4S] 01:08:55 INFO: Joined relay relay://11.12.13.14:443
[MPN4S] 01:09:02 INFO: Detected 1 NAT service

We have several WARNING messages here about the default /Sync directory. Let’s fix those.

# service syncthing stop
Stopping syncthing.
Waiting for PIDS: 27498.

Upon the first Syncthing start, the rc(8) startup script created the /usr/local/etc/syncthing directory with its configuration.

# find /usr/local/etc/syncthing
/usr/local/etc/syncthing
/usr/local/etc/syncthing/https-cert.pem
/usr/local/etc/syncthing/https-key.pem
/usr/local/etc/syncthing/cert.pem
/usr/local/etc/syncthing/key.pem
/usr/local/etc/syncthing/config.xml
/usr/local/etc/syncthing/index-v0.14.0.db
/usr/local/etc/syncthing/index-v0.14.0.db/MANIFEST-000000
/usr/local/etc/syncthing/index-v0.14.0.db/LOCK
/usr/local/etc/syncthing/index-v0.14.0.db/000001.log
/usr/local/etc/syncthing/index-v0.14.0.db/LOG
/usr/local/etc/syncthing/index-v0.14.0.db/CURRENT

Now let’s get back to fixing the WARNING about the /Sync directory.

# grep '/Sync' /usr/local/etc/syncthing/config.xml
    <folder id="default" label="Default Folder" path="//Sync" type="readwrite" rescanIntervalS="3600" fsWatcherEnabled="true" fsWatcherDelayS="10" ignorePerms="false" autoNormalize="true">

# ls /Sync
ls: /Sync: No such file or directory

Now let’s create a dedicated directory for our Syncthing instance and also set it in the /usr/local/etc/syncthing/config.xml config file.

# mkdir /syncthing

# chown syncthing:syncthing /syncthing

# chmod 750 /syncthing

# vi /usr/local/etc/syncthing/config.xml

# grep '/syncthing' /usr/local/etc/syncthing/config.xml
    <folder id="default" label="Default Folder" path="/syncthing" type="readwrite" rescanIntervalS="3600" fsWatcherEnabled="true" fsWatcherDelayS="10" ignorePerms="false" autoNormalize="true">

We will also disable the Relay and Global Announce servers but leave the Local Announce server enabled.

# grep -i relay /usr/local/etc/syncthing/config.xml
        <relaysEnabled>true</relaysEnabled>
        <relayReconnectIntervalM>10</relayReconnectIntervalM>

# vi /usr/local/etc/syncthing/config.xml

# grep -i relay /usr/local/etc/syncthing/config.xml
        <relaysEnabled>false</relaysEnabled>
        <relayReconnectIntervalM>10</relayReconnectIntervalM>

# grep globalAnnounce /usr/local/etc/syncthing/config.xml
        <globalAnnounceServer>default</globalAnnounceServer>
        <globalAnnounceEnabled>true</globalAnnounceEnabled>

# vi /usr/local/etc/syncthing/config.xml

# grep globalAnnounce /usr/local/etc/syncthing/config.xml
        <globalAnnounceServer>default</globalAnnounceServer>
        <globalAnnounceEnabled>false</globalAnnounceEnabled>
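
If you prefer not to hand-edit the XML in vi(1), the same changes can be done with sed(1) one-liners; below is a sketch for the relaysEnabled option (keep the .bak backup until you have verified the result), and the globalAnnounceEnabled change works the same way.

# sed -i .bak 's|<relaysEnabled>true</relaysEnabled>|<relaysEnabled>false</relaysEnabled>|' /usr/local/etc/syncthing/config.xml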

Before restarting Syncthing let’s truncate the /var/log/syncthing.log file to get rid of the now unneeded messages.

# service syncthing stop
Stopping syncthing.

# :> /var/log/syncthing.log

# service syncthing start
Starting syncthing.

Let’s check what the log holds for us now.

# cat /var/log/syncthing.log
[MPN4S] 01:13:38 INFO: syncthing v0.14.48 "Dysprosium Dragonfly" (go1.10.3 freebsd-amd64) root@111amd64-default-job-12 2018-08-08 09:19:19 UTC [noupgrade]
[MPN4S] 01:13:38 INFO: My ID: MPN4S65-UQWC5SP-3LR2XDB-T5JNYET-VQEQC3X-DSAUI27-BQQKZQE-BWQ3NAO
[MPN4S] 01:13:39 INFO: Single thread SHA256 performance is 131 MB/s using minio/sha256-simd (89 MB/s using crypto/sha256).
[MPN4S] 01:13:40 INFO: Hashing performance is 112.97 MB/s
[MPN4S] 01:13:40 INFO: Ready to synchronize "Default Folder" (default) (readwrite)
[MPN4S] 01:13:40 INFO: Overall send rate is unlimited, receive rate is unlimited
[MPN4S] 01:13:40 INFO: Rate limits do not apply to LAN connections
[MPN4S] 01:13:40 INFO: Device MPN4S65-UQWC5SP-3LR2XDB-T5JNYET-VQEQC3X-DSAUI27-BQQKZQE-BWQ3NAO is "blackbox.local" at [dynamic]
[MPN4S] 01:13:40 INFO: TCP listener ([::]:22000) starting
[MPN4S] 01:13:40 INFO: Completed initial scan of readwrite folder "Default Folder" (default)
[MPN4S] 01:13:40 INFO: GUI and API listening on 127.0.0.1:8384
[MPN4S] 01:13:40 INFO: Access the GUI via the following URL: http://127.0.0.1:8384/

We can see that the management interface listens on HTTP rather than HTTPS because the tls option is set to false. We also need to switch the management interface address from localhost (127.0.0.1) to our IP address (10.0.0.100).

# grep -B 1 -A 3 127.0.0.1 /usr/local/etc/syncthing/config.xml
    <gui enabled="true" tls="false" debugging="false">
        <address>127.0.0.1:8384</address>
        <apikey>2jU5aR4zTJLGdEuSLLmdRGgfCgJaUpUv</apikey>
        <theme>default</theme>
    </gui>

# vi /usr/local/etc/syncthing/config.xml

# grep -B 1 -A 3 10.0.0.100 /usr/local/etc/syncthing/config.xml
    <gui enabled="true" tls="true" debugging="false">
        <address>10.0.0.100:8384</address>
        <apikey>2jU5aR4zTJLGdEuSLLmdRGgfCgJaUpUv</apikey>
        <theme>default</theme>
    </gui>

Let’s verify our changes now.

# service syncthing stop
Stopping syncthing.

# :> /var/log/syncthing.log

# service syncthing start
Starting syncthing.

# cat /var/log/syncthing.log
[MPN4S] 01:16:20 INFO: syncthing v0.14.48 "Dysprosium Dragonfly" (go1.10.3 freebsd-amd64) root@111amd64-default-job-12 2018-08-08 09:19:19 UTC [noupgrade]
[MPN4S] 01:16:20 INFO: My ID: MPN4S65-UQWC5SP-3LR2XDB-T5JNYET-VQEQC3X-DSAUI27-BQQKZQE-BWQ3NAO
[MPN4S] 01:16:21 INFO: Single thread SHA256 performance is 131 MB/s using minio/sha256-simd (89 MB/s using crypto/sha256).
[MPN4S] 01:16:22 INFO: Hashing performance is 113.07 MB/s
[MPN4S] 01:16:22 INFO: Ready to synchronize "Default Folder" (default) (readwrite)
[MPN4S] 01:16:22 INFO: Overall send rate is unlimited, receive rate is unlimited
[MPN4S] 01:16:22 INFO: Rate limits do not apply to LAN connections
[MPN4S] 01:16:22 INFO: TCP listener ([::]:22000) starting
[MPN4S] 01:16:22 INFO: Completed initial scan of readwrite folder "Default Folder" (default)
[MPN4S] 01:16:22 INFO: Device MPN4S65-UQWC5SP-3LR2XDB-T5JNYET-VQEQC3X-DSAUI27-BQQKZQE-BWQ3NAO is "blackbox.local" at [dynamic]
[MPN4S] 01:16:22 INFO: GUI and API listening on 10.0.0.100:8384
[MPN4S] 01:16:22 INFO: Access the GUI via the following URL: https://10.0.0.100:8384/
[MPN4S] 01:16:42 INFO: Detected 1 NAT service

The log is now ‘clean’ and we can continue in the browser at the https://10.0.0.100:8384 management interface for the rest of the Syncthing configuration. The browser will of course warn us about the untrusted HTTPS certificate.

syncthing-01.png

Syncthing will ask us whether we agree to share anonymous usage statistics. I leave that choice to you.

syncthing-02.png

The Syncthing dashboard welcomes us with a big red warning about remote administration being allowed without a password. We will fix that in a moment; click the Settings button in that warning.

syncthing-03

Leave the first General tab unmodified.

syncthing-04.png

On the GUI tab we will create the admin user with the SYNCTHINGPASSWORD password for the Syncthing management interface. Use something more sensible here 🙂

syncthing-05.png

I did not modify any settings on the Connections tab. Click Save to continue.

syncthing-06.png

Besides setting the user and password I haven’t changed any other options.

We now have Syncthing running without errors. You will be prompted for that user and password in a moment. We will now remove the Default Folder as it’s not needed. Hit its Edit button.

syncthing-07.png

Then click the Remove button on the bottom.

syncthing-08.png

… and click Yes for confirmation.

syncthing-09.png

The ‘empty’ Syncthing dashboard.

syncthing-10.png

Next we will download, install and configure Syncthing on the Android phone. Depending on your preferences use the F-Droid repository or the Google Play repository … or just an APK file from the source of your choice. The installed Syncthing application is shown below. It takes about 50 MB.

syncthing-11

Let’s start it then; you will see the Welcome message from the Syncthing application.

syncthing-12

Depending on your Android version your phone may ask you to grant Syncthing various permissions. Agree.

syncthing-13

Same as earlier, Syncthing will ask whether you agree to share the statistics data. I also leave that choice to you.

syncthing-14

Syncthing will now require a restart; tap RESTART NOW to continue.

syncthing-15

By default the Camera directory is preconfigured, pointing at the /storage/emulated/0/DCIM directory which holds photos and screenshots taken on the phone. It’s enough for me so I will use it. Tap the Syncthing hamburger menu button.

syncthing-19

… and select Web GUI option.

syncthing-20

You will see the Syncthing management interface on your Android phone; scroll down to add the blackbox.local Syncthing instance from FreeBSD in the Remote Devices section.

syncthing-21

Now in the Remote Devices section hit the Add Remote Device button.

syncthing-22

Remember the Local Announce service we left enabled? This is where it comes in handy. The ID of our FreeBSD Syncthing instance will be displayed, as it was automatically detected on the network.

syncthing-23

Click on the displayed ID and enter the blackbox.local hostname.

Besides entering (clicking) the ID and the hostname I did not set any other options. Click Save.

syncthing-24

The blackbox.local device will be added to the Remote Devices list.

syncthing-25

Below are the Camera directory properties. Remember to select blackbox.local as the allowed host (small yellow slider).

syncthing-26

… and the blackbox.local device properties.

syncthing-27

Now let’s get back to the FreeBSD Syncthing instance management interface in the browser. You will be prompted to add the Android phone’s Syncthing – SM-A320FL in my case – to the devices. Hit the green Add Device button.

syncthing-28.png

Click Save without adding other options.

syncthing-29.png

The SM-A320FL device for our Android phone is now visible in the Remote Devices section.

syncthing-30.png

You should now be prompted that the SM-A320FL device wants to share the Camera directory. Hit the green Add button.

syncthing-31.png

Enter SM-A320FL as the folder label and /syncthing/SM-A320FL as the directory name on the FreeBSD Syncthing instance. Also make sure that SM-A320FL is selected in the Share With Devices section on the bottom.

syncthing-32.png

The SM-A320FL device and the SM-A320FL folder from this device are now configured. You will first see an Out of Sync message for the SM-A320FL folder. The synchronization should now start and its progress can be observed both on the phone and in the management interface of the FreeBSD Syncthing instance in the browser.

syncthing-33.png

The SM-A320FL folder switched its status to Syncing with a progress indicator.

syncthing-34.png

You will see similar status on the Android phone.

syncthing-36

After some time you will see that the SM-A320FL folder has the Up to Date status. That means that all files from the Camera directory are synchronized to the FreeBSD Syncthing instance.

syncthing-35

The created/synced directories from the Android phone look as follows on the FreeBSD Syncthing instance.

# find /syncthing -type d
/syncthing
/syncthing/SM-A320FL
/syncthing/SM-A320FL/Camera
/syncthing/SM-A320FL/Camera/.AutoPortrait
/syncthing/SM-A320FL/Screenshots
/syncthing/SM-A320FL/.thumbnails
/syncthing/SM-A320FL/.stfolder

Now you have your Camera files synced as backup.

The complete Syncthing config from the FreeBSD instance – /usr/local/etc/syncthing/config.xml – is available here. After downloading, rename it from *.xml.key to *.xml (WordPress limitation).

UPDATE 1

The Syncthing on FreeBSD article was featured in the BSD Now 262 – OpenBSD Surfacing episode.

Thanks for mentioning!

EOF

Silent Fanless FreeBSD Desktop/Server

Today I will write about a silent fanless FreeBSD desktop or server computer … or NAS … or you name it, as it can serve multiple purposes. It is also a very low power solution, which also means that it will not overheat. Silent means no fans at all, even in the PSU. The form factor of the system should also be kept to a minimum, so Mini-ITX seems the best solution here.

I also made two follow-ups to this article:

I have chosen Intel based solutions as they are very low power (6-10W). If you prefer AMD (as I often do), the closest solution in comparable price and power is the Biostar A68N-2100 motherboard with the AMD E1-2100 CPU and 9W of power. Of course AMD has even more low power SoC solutions, but finding a Mini-ITX motherboard with a decent price is not an easy task. For comparison, Intel has lots of such solutions below 6W which can be nicely filtered on the ark.intel.com page. Pity that AMD does not provide such filtering for their products. I also required AES instructions, as storage encryption (GELI on FreeBSD) today seems as obvious as HTTPS for web pages.
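
Once the system is up you can quickly confirm that the CPU really exposes the AES instructions and put them to work with GELI. This is only a sketch – the da0p3 partition below is a made-up example, so substitute your own provider – but the commands themselves are standard FreeBSD tools.

# grep AESNI /var/run/dmesg.boot                  # the AESNI flag should show up in the CPU features
# kldload aesni                                   # load the AES-NI accelerated crypto driver
# geli init -e AES-XTS -l 256 -s 4096 /dev/da0p3  # initialize an encrypted provider (example partition)
# geli attach /dev/da0p3                          # attach it as /dev/da0p3.eli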

Here is how the system looks powered up and working.

itx-mobo

This motherboard uses the Intel J3355 SoC which draws 10W and has AES instructions. It has two cores at your disposal and also supports the VT-x and EPT extensions, so you can even run Bhyve on it.

Components

Now, an example system would look like the one below; here are the components with their prices.

  $49  CPU/Motherboard ASRock J3355B-ITX Mini-ITX
  $14  RAM Crucial 4 GB DDR3L 1.35V (low power)
  $17  PSU 12V 160W Pico (internal)
  $11  PSU 12V 96W FSP (external)
   $5  USB 2.0 Drive 16 GB ADATA
   $4  USB Wireless 802.11n
 $100  TOTAL

The 12V 160W Pico (internal) and 12V 96W FSP (external) PSUs can be purchased on aliexpress.com or ebay.com for example; at least that is where I got mine.

Here is the 12V 160W Pico (internal) PSU and its optional additional cables to power the optional HDDs. Of course it provides one SATA power and one MOLEX power connector, so an additional MOLEX-SATA power adapter for about $1 would be needed.

itx-psu-int

itx-psu-int-cables

Here is the 12V 96W FSP (external) PSU without the power cord.

itx-psu-ext

itx-psu-ext-close

This is still without a case. I currently have the SilverStone SG05 (today I would probably buy something else) which cost me about $100, but there are cheaper solutions with similar features. If you would like to use two 2.5 drives as a ZFS mirror for even lower power and noise, then the Inter-Tech ITX-601 case seems far better as it also comes with a silent PSU.

The SilverStone SG05 case with ‘loud’ PSU.

itx-35-case-silverstone

The Inter-Tech ITX-601 case with silent PSU.

itx-25-case-front

itx-25-case-top

With the Inter-Tech ITX-601 case the components and their prices would look like this.

  $49  CPU/Motherboard ASRock J3355B-ITX Mini-ITX
  $14  RAM Crucial 4 GB DDR3L 1.35V (low power)
  $50  CASE Inter-Tech ITX-601 (comes with PSU)
   $5  USB 2.0 Drive 16 GB ADATA C008
   $4  USB Wireless 802.11n
 $122  TOTAL

Of course if ‘you wanna go pro’ there are great cases such as the Supermicro 721TQ-250B – also used by the FreeNAS Mini appliance – and the SilverStone CS01-HS with disks loaded from the top, but they both cost $160 without the PSU.

The Supermicro 721TQ-250B case.

itx-case-pro-SM

The SilverStone CS01-HS case.

itx-case-pro-SS

The RAM vendor is not important here; what is important is to get the low power DDR3 memory – the DDR3L variant – as it draws less power.

The boring RAM stick itself.

itx-ram

I have used a 16 GB ADATA C008 USB 2.0 drive for the system drive, but if you are going to buy one, then I would get the 16 GB Sandisk Cruzer Fit USB 2.0 drive as it barely sticks out of the port – or even two of them for a ZFS mirror for the system if it’s critical.

The Sandisk Cruzer Fit flash.

itx-usb-sandisk-cruzer-fit.jpg

I also used a tiny USB WiFi stick which is about the size of the Sandisk Cruzer Fit.

itx-usb-wifi

Costs

This gives us a total silent fanless system price of about $120. It’s about ONE TENTH OF THE COST of the cheapest FreeNAS hardware solution available – the FreeNAS Mini (Diskless) costs $1156, also without disks.

FreeBSD

I have tried FreeBSD 12.0-CURRENT r331740 on this box, but the upcoming FreeBSD 11.2-RELEASE (currently at the RC1 stage) would do just as well. Below is the dmesg(8) console output of the system boot on this machine.

Copyright (c) 1992-2018 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
	The Regents of the University of California. All rights reserved.
FreeBSD is a registered trademark of The FreeBSD Foundation.
FreeBSD 12.0-CURRENT #0 r331740: Thu Mar 29 21:24:24 UTC 2018
    root@releng3.nyi.freebsd.org:/usr/obj/usr/src/amd64.amd64/sys/GENERIC amd64
FreeBSD clang version 6.0.0 (tags/RELEASE_600/final 326565) (based on LLVM 6.0.0)
WARNING: WITNESS option enabled, expect reduced performance.
VT(efifb): resolution 1024x768
CPU: Intel(R) Celeron(R) CPU J3355 @ 2.00GHz (1996.89-MHz K8-class CPU)
  Origin="GenuineIntel"  Id=0x506c9  Family=0x6  Model=0x5c  Stepping=9
  Features=0xbfebfbff
  Features2=0x4ff8ebbf
  AMD Features=0x2c100800
  AMD Features2=0x101
  Structured Extended Features=0x2294e283
  XSAVE Features=0xf
  VT-x: PAT,HLT,MTF,PAUSE,EPT,UG,VPID,VID,PostIntr
  TSC: P-state invariant, performance statistics
real memory  = 4294967296 (4096 MB)
avail memory = 3696037888 (3524 MB)
Event timer "LAPIC" quality 600
ACPI APIC Table: 
WARNING: L1 data cache covers fewer APIC IDs than a core (0 < 1)
FreeBSD/SMP: Multiprocessor System Detected: 2 CPUs
FreeBSD/SMP: 1 package(s) x 2 core(s)
random: unblocking device.
ioapic0  irqs 0-119 on motherboard
SMP: AP CPU #1 Launched!
Timecounter "TSC" frequency 1996886000 Hz quality 1000
random: entropy device external interface
netmap: loaded module
[ath_hal] loaded
module_register_init: MOD_LOAD (vesa, 0xffffffff81034600, 0) error 19
random: registering fast source Intel Secure Key RNG
random: fast provider: "Intel Secure Key RNG"
kbd1 at kbdmux0
nexus0
cryptosoft0:  on motherboard
acpi0:  on motherboard
unknown: I/O range not supported
cpu0:  on acpi0
cpu1:  on acpi0
attimer0:  port 0x40-0x43,0x50-0x53 irq 0 on acpi0
Timecounter "i8254" frequency 1193182 Hz quality 0
Event timer "i8254" frequency 1193182 Hz quality 100
atrtc0:  port 0x70-0x77 on acpi0
atrtc0: Warning: Couldn't map I/O.
atrtc0: registered as a time-of-day clock, resolution 1.000000s
Event timer "RTC" frequency 32768 Hz quality 0
hpet0:  iomem 0xfed00000-0xfed003ff irq 8 on acpi0
Timecounter "HPET" frequency 19200000 Hz quality 950
Event timer "HPET" frequency 19200000 Hz quality 550
Event timer "HPET1" frequency 19200000 Hz quality 440
Event timer "HPET2" frequency 19200000 Hz quality 440
Event timer "HPET3" frequency 19200000 Hz quality 440
Event timer "HPET4" frequency 19200000 Hz quality 440
Event timer "HPET5" frequency 19200000 Hz quality 440
Event timer "HPET6" frequency 19200000 Hz quality 440
Timecounter "ACPI-fast" frequency 3579545 Hz quality 900
acpi_timer0:  port 0x408-0x40b on acpi0
pcib0:  port 0xcf8-0xcff on acpi0
pci0:  on pcib0
vgapci0:  port 0xf000-0xf03f mem 0x90000000-0x90ffffff,0x80000000-0x8fffffff irq 19 at device 2.0 on pci0
vgapci0: Boot video device
hdac0:  mem 0x91210000-0x91213fff,0x91000000-0x910fffff irq 25 at device 14.0 on pci0
pci0:  at device 15.0 (no driver attached)
ahci0:  port 0xf090-0xf097,0xf080-0xf083,0xf060-0xf07f mem 0x91214000-0x91215fff,0x91218000-0x912180ff,0x91217000-0x912177ff irq 19 at device 18.0 on pci0
ahci0: AHCI v1.31 with 2 6Gbps ports, Port Multiplier supported
ahcich0:  at channel 0 on ahci0
ahcich1:  at channel 1 on ahci0
pcib1:  irq 22 at device 19.0 on pci0
pci1:  on pcib1
pcib2:  irq 20 at device 19.2 on pci0
pci2:  on pcib2
re0:  port 0xe000-0xe0ff mem 0x91104000-0x91104fff,0x91100000-0x91103fff irq 20 at device 0.0 on pci2
re0: Using 1 MSI-X message
re0: ASPM disabled
re0: Chip rev. 0x4c000000
re0: MAC rev. 0x00000000
miibus0:  on re0
rgephy0:  PHY 1 on miibus0
rgephy0:  none, 10baseT, 10baseT-FDX, 10baseT-FDX-flow, 100baseTX, 100baseTX-FDX, 100baseTX-FDX-flow, 1000baseT-FDX, 1000baseT-FDX-master, 1000baseT-FDX-flow, 1000baseT-FDX-flow-master, auto, auto-flow
re0: Using defaults for TSO: 65518/35/2048
re0: Ethernet address: 70:85:c2:xx:xx:xx
re0: netmap queues/slots: TX 1/256, RX 1/256
xhci0:  mem 0x91200000-0x9120ffff irq 17 at device 21.0 on pci0
xhci0: 32 bytes context size, 64-bit DMA
usbus0 on xhci0
usbus0: 5.0Gbps Super Speed USB v3.0
isab0:  at device 31.0 on pci0
isa0:  on isab0
acpi_button0:  on acpi0
acpi_tz0:  on acpi0
ppc1:  port 0x378-0x37f,0x778-0x77f irq 5 drq 3 on acpi0
ppc1: SMC-like chipset (ECP/EPP/PS2/NIBBLE) in COMPATIBLE mode
ppc1: FIFO with 16/16/9 bytes threshold
ppbus0:  on ppc1
lpt0:  on ppbus0
lpt0: Interrupt-driven port
ppi0:  on ppbus0
uart0:  port 0x3f8-0x3ff irq 4 flags 0x10 on acpi0
uart1:  port 0x2f8-0x2ff irq 3 on acpi0
atkbdc0:  at port 0x60,0x64 on isa0
atkbd0:  irq 1 on atkbdc0
kbd0 at atkbd0
atkbd0: [GIANT-LOCKED]
atkbdc0: non-PNP ISA device will be removed from GENERIC in FreeBSD 12.
est0:  on cpu0
est1:  on cpu1
ZFS filesystem version: 5
ZFS storage pool version: features support (5000)
Timecounters tick every 1.000 msec
hdacc0:  at cad 0 on hdac0
hdaa0:  at nid 1 on hdacc0
pcm0:  at nid 21 and 24,26 on hdaa0
pcm1:  at nid 20 and 25 on hdaa0
pcm2:  at nid 27 on hdaa0
hdacc1:  at cad 2 on hdac0
hdaa1:  at nid 1 on hdacc1
pcm3:  at nid 3 on hdaa1
ugen0.1:  at usbus0
uhub0:  on usbus0
Trying to mount root from zfs:zroot/ROOT/default []...
Root mount waiting for: usbus0
WARNING: WITNESS option enabled, expect reduced performance.
uhub0: 15 ports with 15 removable, self powered
Root mount waiting for: usbus0
Root mount waiting for: usbus0
ugen0.2:  at usbus0
umass0 on uhub0
umass0:  on usbus0
umass0:  SCSI over Bulk-Only; quirks = 0x8100
umass0:2:0: Attached to scbus2
da0 at umass-sim0 bus 0 scbus2 target 0 lun 0
da0:  Removable Direct Access SPC-2 SCSI device
da0: Serial Number 27A2100480550067
da0: 40.000MB/s transfers
da0: 14800MB (30310400 512 byte sectors)
da0: quirks=0x2
Root mount waiting for: usbus0
ugen0.3:  at usbus0
Root mount waiting for: usbus0
ugen0.4:  at usbus0
uhub1 on uhub0
uhub1:  on usbus0
Root mount waiting for: usbus0
Root mount waiting for: usbus0
uhub1: 4 ports with 3 removable, bus powered
ugen0.5:  at usbus0
ukbd0 on uhub1
ukbd0:  on usbus0
kbd2 at ukbd0
re0: link state changed to DOWN
rtwn0 on uhub0
rtwn0:  on usbus0
rtwn0: MAC/BB RTL8188CUS, RF 6052 1T1R

I haven’t tried the HDMI output, but the VGA output worked properly both in console and X11, same for sound, the onboard NIC and the rest of the provided interfaces. To connect to the Internet and fetch packages I used a tiny USB WiFi stick based on the RTL8188CUS chip, which also worked very well; here are the details about it from the dmesg(8) console output.

ugen0.5:  at usbus0
rtwn0 on uhub0
rtwn0:  on usbus0
rtwn0: MAC/BB RTL8188CUS, RF 6052 1T1R

Storage

If it is going to serve as a NAS, then what storage should you attach to it? That depends on how much storage space you need. If you can fit within 5 TB (which is quite a lot anyway), you can still use the Inter-Tech ITX-601 case, as Seagate provides 5 TB 2.5 drives with the BarraCuda ST5000LM000 model.

I currently use two 4 TB 3.5 drives as they are cheaper than the 2.5 drives, but that of course requires a bigger case and more power and also makes more noise.

To keep the system totally silent you would of course have to use SSD drives for the storage, but that would be very expensive. For example, getting two 1 TB 2.5 SSD drives to mirror them would cost you about $400, which is $400 per usable TB in a mirror. For the same price you could get two 5 TB 2.5 HDD drives – $80 per usable TB, ONE FIFTH OF THE COST compared to SSD drives. Or two 8 TB 3.5 HDD drives – $50 per usable TB, ONE EIGHTH OF THE COST compared to SSD drives. As you can see total silence comes at a price 😉

Expansion

As these motherboards come with a PCI-Express slot you can expand the features even more, for example with a 10 GE card or an additional SATA controller. When I used the older solution I used that slot for a USB 3.0 ports expansion card.

These kinds of motherboards often come with internal Mini PCI-Express ports which are ideal for wireless devices or SSD drives.

System

You can put plain FreeBSD on top of it or the server oriented Solaris/Illumos distribution OmniOSce. You can use a prebuilt FreeBSD based NAS solution like FreeNAS, NAS4Free, ZFSguru or even Solaris/Illumos based storage with the napp-it appliance.

You can of course stick with one SSD or USB flash drive for the system and use it as a desktop with an install like in the FreeBSD Desktop – Part 2 – Install article, but in that case I would suggest getting an even smaller case than the ones described here.

With a WiFi card that supports Host AP mode (most Atheros devices) you can also turn it into a secure wireless access point on a HardenedBSD system, or even OpenBSD.

UPDATE 1 – Motherboard with ECC RAM Support

Bill Bog mentioned in the comments below that this kind of setup does not offer ECC memory, and I agree with him that it’s better to have ECC than not to have it, so I am adding this update with information on how to still achieve a cheap, silent and fanless setup.

The ASRock C2550D4I comes to the rescue with ECC memory support, and it’s not THAT expensive as you can get it new for about $290. It comes with the quad-core Intel Atom C2550 CPU and uses only 14W of power, which is not bad considering that it can support up to 64 GB of ECC RAM and has 12 (!) SATA ports. It also covers all the important features such as AES instructions and the VT-x and EPT extensions for Bhyve support. It still provides a PCI-Express x8 slot and even remote management with IPMI. And last but not least it has two 1 GE LAN ports.

Here is how it looks.

itx-mobo-ecc-C2550.jpg

While ECC RAM is usually more expensive than the regular kind, the used ECC RAM stick needed for such a setup is very cheap – without any extra effort I was able to find a used Samsung DDR3L 4GB 1333 ECC REG. PC3L-10600R memory stick for about $10.

The less boring ECC RAM stick.

itx-mobo-ecc-ram

The example complete ECC setup would look like this.

 $290  CPU/Motherboard ASRock C2550D4I
  $10  RAM Samsung 4 GB DDR3L 1.35V ECC REG
  $50  CASE Inter-Tech ITX-601 (comes with PSU)
  $10  2 x Sandisk Cruzer Fit 16 GB
   $4  USB Wireless 802.11n
 $364  TOTAL

Still about A THIRD OF THE COST compared to the FreeNAS Mini (Diskless) appliance ($364 versus $1156), and we have two Sandisk Cruzer Fit 16 GB drives to put the system on a ZFS mirror, as we already use ECC memory for increased data security.

UPDATE 2

The Silent Fanless FreeBSD Desktop/Server article was featured in the BSD Now 253 – Silence of the Fans episode.

Thanks for mentioning!

UPDATE 3

Seems that I indirectly created a $50 discount on http://SilentPC.com machines 🙂

The Silent Fanless FreeBSD Desktop/Server article was featured in the BSD Now 253 – Silence of the Fans episode. Peter from SilentPC wrote here – http://dpaste.com/2N6DC6P – that if you mention BSD Now in the comments at checkout they will give you a $50 discount on a system. You can also hear it talked through in the latest BSD Now 262 – OpenBSD Surfacing episode from 1:03:27 to 1:04:37.

EOF

Bareos Backup Server on FreeBSD

Ever heard of Bareos? You have probably heard of Bacula. Read about the difference here – Why Bareos forked from Bacula?

bareos-logo

If you are interested in a more enterprise grade backup solution then check the IBM TSM (Spectrum Protect) on Veritas Cluster Server article.

Bareos (Backup Archiving Recovery Open Sourced) is a network based open source backup solution. It is a 100% open source fork of the backup project from the bacula.org site. The fork has been in development since late 2010 and it has a lot of new features. The source is published on GitHub and licensed under the AGPLv3 license. Bareos supports the ‘Always Incremental’ backup scheme, which is interesting especially for users with big data – the time and network capacity consuming full backups only have to be taken once. Bareos comes with a WebUI for administration tasks and a restore file browser. Bareos can back up data to disk and to tape drives as well as tape libraries. It supports compression and encryption, both hardware-based (like on LTO tape drives) and software-based. You can also get professional services and support from Bareos, as well as a Bareos subscription service that provides you access to special quality assured installation packages.

I started my sysadmin job with the backup system as one of my new responsibilities, so this will be like going back to the roots. As I look at the ‘backup’ market it is more and more popular – especially in cloud oriented environments – to implement various levels of protection like GOLD, SILVER and BRONZE for example. They of course have different retention times, numbers of backups kept, and different RTO and RPO. Below is an example implementation of BRONZE level backups in Bareos. I used 3 groups – A, B and C – with the FULL backup starting on DAY 0 (A group), DAY 1 (B group) and DAY 2 (C group).

bareos-sched-levels-256.png

This way you still have FULL backups quite often and with 3 groups you can balance the network load. For the days on which we will not be doing FULL backups we will be doing DIFFERENTIAL backups. People often confuse them with INCREMENTAL backups. The difference is that DIFFERENTIAL backups are always taken against the last FULL backup, so it’s always ‘one level of combining’. INCREMENTAL ones are taken against the last backup of any type, so it’s possible to have 100+ levels of combining – against 99 earlier INCREMENTAL backups and the 1 FULL backup. That is why I prefer DIFFERENTIAL ones here: faster recovery. That is what backups are generally all about – recovery – and some people/companies tend to forget that.
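
To make the group idea more concrete, below is a minimal sketch of how a per-group schedule could look in Bareos syntax. The BRONZE-A name and the exact days are hypothetical – shift the FULL day accordingly for the B and C groups – and only the form of the Run directives mirrors the schedules used later in this article.

Schedule {
  Name = "BRONZE-A"                        # hypothetical schedule for group A
  Run = Full 1st sat at 21:00              # FULL backup on the group starting day
  Run = Differential 2nd-5th sat at 21:00  # DIFFERENTIAL on the remaining saturdays
  Run = Differential sun-fri at 21:00      # DIFFERENTIAL on the other days
}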

The implementation of BRONZE in these three groups is not perfect, but it ‘does the job’. I also made a ‘simulation’ of how these groups will overlap at the end/beginning of a month; here is the result.

bareos-sched-cross-256.png

Not bad for my taste.

Today I will show you how to install and configure a Bareos backup server on the FreeBSD operating system. It will be the most simplified setup, with all services on a single machine:

  • bareos-dir
  • bareos-sd
  • bareos-webui
  • bareos-fd

I also assume that, in order to provide storage space for the backup data itself, you would mount resources from external NFS shares.

To get in touch with the Bareos terminology and technology check their great Manual in the HTML or PDF version, depending on which format you prefer for reading documentation. Their FAQ also provides a lot of needed answers.

This diagram may also be useful to get a grip on the Bareos world.

bareos-overview-small

System

As every system needs to have a name, we will use the Latin word closest to backup – replica – for our FreeBSD system hostname. The install is generally the same as in the FreeBSD Desktop – Part 2 – Install article. Here is our installed FreeBSD system with its login prompt.

freebsd-nakatomi.jpg

Sorry, couldn’t resist 🙂

Here are the 3 most important configuration files after some time spent in vi(1) with them.

root@replica:~ # cat /etc/rc.conf
# NETWORK
  hostname=replica.backup.org
  ifconfig_em0="inet 10.0.10.30/24 up"
  defaultrouter="10.0.10.1"

# DAEMONS
  zfs_enable=YES
  sshd_enable=YES
  nfs_client_enable=YES
  syslogd_flags="-ss"
  sendmail_enable=NONE

# OTHER
  clear_tmp_enable=YES
  dumpdev=NO

# BAREOS
# postgresql_enable=YES
# postgresql_class=pgsql
# bareos_dir_enable=YES
# bareos_sd_enable=YES
# bareos_fd_enable=YES
# php_fpm_enable=YES
# nginx_enable=YES

As you can see, all the ‘core’ services for Bareos are currently disabled on purpose. We will enable them later.

Parameters and modules to be set at boot.

root@replica:~ # cat /boot/loader.conf
# BOOT OPTIONS
  autoboot_delay=2
  kern.geom.label.disk_ident.enable=0
  kern.geom.label.gptid.enable=0

# MODULES
  zfs_load=YES

# IPC
  kern.ipc.shmseg=1024
  kern.ipc.shmmni=1024
  kern.ipc.shmseg=1024

Parameters to be set at runtime.

root@replica:~ # cat /etc/sysctl.conf
# SECURITY
  security.bsd.see_other_uids=0
  security.bsd.see_other_gids=0
  security.bsd.unprivileged_read_msgbuf=0
  security.bsd.unprivileged_proc_debug=0
  security.bsd.stack_guard_page=1
  kern.randompid=9100

# ZFS
  vfs.zfs.min_auto_ashift=12

# DISABLE ANNOYING THINGS
  kern.coredump=0
  hw.syscons.bell=0
  kern.vt.enable_bell=0

# IPC
  kern.ipc.shmall=524288
  kern.ipc.maxsockbuf=5242880
  kern.ipc.shm_allow_removed=1

After the install we will disable the mounting of /zroot.

root@replica:/ # zfs set mountpoint=none zroot

As we have sendmail(8) disabled we will need to take care of its queue.

root@replica:~ # cat > /etc/cron.d/sendmail-clean-clientmqueue << __EOF
# CLEAN SENDMAIL
0 * * * * root /bin/rm -r -f /var/spool/clientmqueue/*
__EOF

Assuming the NFS servers are configured in the /etc/hosts file, the ‘complete’ /etc/hosts file would look like this.

root@replica:~ # grep '^[^#]' /etc/hosts
::1        localhost localhost.my.domain
127.0.0.1  localhost localhost.my.domain
10.0.10.40 replica.backup.org replica
10.0.10.50 nfs-pri.backup.org nfs-pri
10.0.20.50 nfs-sec.backup.org nfs-sec

Let’s verify outside world connectivity – needed for fetching the Bareos packages.

root@replica:~ # nc -v bareos.org 443
Connection to bareos.org 443 port [tcp/https] succeeded!
^C
root@replica:~ #

Packages

As we want the latest packages we will modify the pkg(8) repository file – /etc/pkg/FreeBSD.conf – to switch from the quarterly branch to the latest one.

root@replica:~ # grep '^[^#]' /etc/pkg/FreeBSD.conf
FreeBSD: {
  url: "pkg+http://pkg.FreeBSD.org/${ABI}/quarterly",
  mirror_type: "srv",
  signature_type: "fingerprints",
  fingerprints: "/usr/share/keys/pkg",
  enabled: yes
}

root@replica:~ # sed -i '' s/quarterly/latest/g /etc/pkg/FreeBSD.conf

root@replica:~ # grep '^[^#]' /etc/pkg/FreeBSD.conf
FreeBSD: {
  url: "pkg+http://pkg.FreeBSD.org/${ABI}/latest",
  mirror_type: "srv",
  signature_type: "fingerprints",
  fingerprints: "/usr/share/keys/pkg",
  enabled: yes
}
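
After switching the branch it may be worth forcing a refresh of the repository catalogue so that pkg(8) does not keep using the stale quarterly metadata; this is optional, as pkg(8) will update it on its next operation anyway.

root@replica:~ # pkg update -f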

We will use the Bareos packages from pkg(8) as they are readily available – no need to waste time and power on compilation.

root@replica:~ # pkg search bareos
The package management tool is not yet installed on your system.
Do you want to fetch and install it now? [y/N]: y
(...)
bareos-bat-16.2.7              Backup archiving recovery open sourced (GUI)
bareos-client-16.2.7           Backup archiving recovery open sourced (client)
bareos-client-static-16.2.7    Backup archiving recovery open sourced (static client)
bareos-docs-16.2.7             Bareos document set (PDF)
bareos-server-16.2.7           Backup archiving recovery open sourced (server)
bareos-traymonitor-16.2.7      Backup archiving recovery open sourced (traymonitor)
bareos-webui-16.2.7            PHP-Frontend to manage Bareos over the web

Now we will install Bareos along with all needed components for its environment.

root@replica:~ # pkg install \
  bareos-client bareos-server bareos-webui postgresql95-server nginx \
  php56 php56-xml php56-session php56-simplexml php56-gd php56-ctype \
  php56-mbstring php56-zlib php56-tokenizer php56-iconv php56-mcrypt \
  php56-pear-DB_ldap php56-zip php56-dom php56-sqlite3 php56-gettext \
  php56-curl php56-json php56-opcache php56-wddx php56-hash php56-soap

The bareos, pgsql and www users have been added by pkg(8) along with their packages.

root@replica:~ # id bareos
uid=997(bareos) gid=997(bareos) groups=997(bareos)

root@replica:~ # id pgsql
uid=70(pgsql) gid=70(pgsql) groups=70(pgsql)

root@replica:~ # id www
uid=80(www) gid=80(www) groups=80(www)

PostgreSQL

First we will setup the PostgreSQL database.

We will add a separate pgsql login class for the PostgreSQL database user.

root@replica:~ # cat >> /etc/login.conf << __EOF
# PostgreSQL
pgsql:\
        :lang=en_US.UTF-8:\
        :setenv=LC_COLLATE=C:\
        :tc=default:

__EOF

This is one of the rare occasions when I would appreciate the -p flag from the AIX grep command to display the whole paragraph 😉

root@replica:~ # grep -B 1 -A 3 pgsql /etc/login.conf
# PostgreSQL
pgsql:\
        :lang=en_US.UTF-8:\
        :setenv=LC_COLLATE=C:\
        :tc=default:

Let’s reload the login classes database.

root@replica:~ # cap_mkdb /etc/login.conf

Here are the PostgreSQL rc(8) startup script ‘options’ that can be set in the /etc/rc.conf file.

root@replica:~ # grep '#  postgresql' /usr/local/etc/rc.d/postgresql
#  postgresql_enable="YES"
#  postgresql_data="/usr/local/pgsql/data"
#  postgresql_flags="-w -s -m fast"
#  postgresql_initdb_flags="--encoding=utf-8 --lc-collate=C"
#  postgresql_class="default"
#  postgresql_profiles=""

We only need postgresql_enable and postgresql_class to be set.

We will enable them now in the /etc/rc.conf file.

root@replica:~ # grep -A 10 BAREOS /etc/rc.conf
# BAREOS
  postgresql_enable=YES
  postgresql_class=pgsql
# bareos_dir_enable=YES
# bareos_sd_enable=YES
# bareos_fd_enable=YES
# php_fpm_enable=YES
# nginx_enable=YES

We will now init the PostgreSQL database for Bareos.

root@replica:~ # /usr/local/etc/rc.d/postgresql initdb
The files belonging to this database system will be owned by user "pgsql".
This user must also own the server process.

The database cluster will be initialized with locales
  COLLATE:  C
  CTYPE:    en_US.UTF-8
  MESSAGES: en_US.UTF-8
  MONETARY: en_US.UTF-8
  NUMERIC:  en_US.UTF-8
  TIME:     en_US.UTF-8
The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory /usr/local/pgsql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
creating template1 database in /usr/local/pgsql/data/base/1 ... ok
initializing pg_authid ... ok
initializing dependencies ... ok
creating system views ... ok
loading system objects' descriptions ... ok
creating collations ... ok
creating conversions ... ok
creating dictionaries ... ok
setting privileges on built-in objects ... ok
creating information schema ... ok
loading PL/pgSQL server-side language ... ok
vacuuming database template1 ... ok
copying template1 to template0 ... ok
copying template1 to postgres ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    /usr/local/bin/pg_ctl -D /usr/local/pgsql/data -l logfile start

… and start it.

root@replica:~ # /usr/local/etc/rc.d/postgresql start
LOG:  ending log output to stderr
HINT:  Future log output will go to log destination "syslog".

We will now take care of the Bareos server configuration. There are a lot of *.sample files that we do not need. We also need to take care of the permissions.

root@replica:~ # chown -R bareos:bareos /usr/local/etc/bareos
root@replica:~ # find /usr/local/etc/bareos -type f -exec chmod 640 {} ';'
root@replica:~ # find /usr/local/etc/bareos -type d -exec chmod 750 {} ';'
root@replica:~ # find /usr/local/etc/bareos -name \*\.sample -delete

We also need to change permissions for the /var/run and /var/db directories for Bareos.

root@replica:~ # chown -R bareos:bareos /var/db/bareos
root@replica:~ # chown -R bareos:bareos /var/run/bareos

To keep a ‘trace’ of our changes we will make a copy of the original configuration, so we can track what we have changed in the process of configuring our Bareos environment.

root@replica:~ # cp -a /usr/local/etc/bareos /usr/local/etc/bareos.ORG
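
With that pristine copy in place we can review our modifications at any time with a recursive diff(1); this is just a convenience, not a required step.

root@replica:~ # diff -ru /usr/local/etc/bareos.ORG /usr/local/etc/bareos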

Now we will configure the Bareos Catalog in the /usr/local/etc/bareos/bareos-dir.d/catalog/MyCatalog.conf file; here are its contents after our modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/catalog/MyCatalog.conf
Catalog {
  Name = MyCatalog
  dbdriver = "postgresql"
  dbname = "bareos"
  dbuser = "bareos"
  dbpassword = "BAREOS-DATABASE-PASSWORD"
}

Let’s make sure that the pgsql and www users are in the bareos group, so they can read its configuration files.

root@replica:~ # pw groupmod bareos -m pgsql

root@replica:~ # id pgsql
uid=70(pgsql) gid=70(pgsql) groups=70(pgsql),997(bareos)

root@replica:~ # pw groupmod bareos -m www

root@replica:~ # id www
uid=80(www) gid=80(www) groups=80(www),997(bareos)

Now we will prepare the PostgreSQL database for our Bareos instance. We will use the scripts provided by the Bareos package in the /usr/local/lib/bareos/scripts path.

root@replica:~ # su - pgsql

$ whoami
pgsql

$ /usr/local/lib/bareos/scripts/create_bareos_database
Creating postgresql database
CREATE DATABASE
ALTER DATABASE
Database encoding OK
Creating of bareos database succeeded.

$ /usr/local/lib/bareos/scripts/make_bareos_tables
Making postgresql tables
CREATE TABLE
ALTER TABLE
CREATE INDEX
CREATE TABLE
ALTER TABLE
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE INDEX
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
DELETE 0
INSERT 0 1
Creation of Bareos PostgreSQL tables succeeded.

$ /usr/local/lib/bareos/scripts/grant_bareos_privileges
Granting postgresql tables
CREATE ROLE
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
Privileges for user bareos granted ON database bareos.

We can now verify that the needed database has been created.

root@replica:~ # su -m bareos -c 'psql -l'
                             List of databases
   Name    | Owner | Encoding  | Collate |    Ctype    | Access privileges 
-----------+-------+-----------+---------+-------------+-------------------
 bareos    | pgsql | SQL_ASCII | C       | C           | 
 postgres  | pgsql | UTF8      | C       | en_US.UTF-8 | 
 template0 | pgsql | UTF8      | C       | en_US.UTF-8 | =c/pgsql         +
           |       |           |         |             | pgsql=CTc/pgsql
 template1 | pgsql | UTF8      | C       | en_US.UTF-8 | =c/pgsql         +
           |       |           |         |             | pgsql=CTc/pgsql
(4 rows)
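
As an additional sanity check we can query the freshly created catalog schema as the bareos user; the job table is part of the standard Bareos catalog and should simply be empty at this point.

root@replica:~ # su -m bareos -c 'psql -d bareos -c "SELECT count(*) FROM job;"'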

We will also add a housekeeping script for the PostgreSQL database and put it into crontab(1).

root@replica:~ # su - pgsql

$ whoami
pgsql

$ cat > /usr/local/pgsql/vacuum.sh << __EOF
#! /bin/sh

/usr/local/bin/vacuumdb -a -z 1> /dev/null 2> /dev/null
/usr/local/bin/reindexdb -a   1> /dev/null 2> /dev/null
/usr/local/bin/reindexdb -s   1> /dev/null 2> /dev/null
__EOF

$ chmod +x /usr/local/pgsql/vacuum.sh

$ cat /usr/local/pgsql/vacuum.sh
#! /bin/sh

/usr/local/bin/vacuumdb -a -z 1> /dev/null 2> /dev/null
/usr/local/bin/reindexdb -a   1> /dev/null 2> /dev/null
/usr/local/bin/reindexdb -s   1> /dev/null 2> /dev/null

$ crontab -e

$ exit

root@replica:~ # cat /var/cron/tabs/pgsql
# DO NOT EDIT THIS FILE - edit the master and reinstall.
# (/tmp/crontab.Be9j9VVCUa installed on Thu Apr 26 21:45:04 2018)
# (Cron version -- $FreeBSD$)
0 0 * * * /usr/local/pgsql/vacuum.sh

root@replica:~ # su -m pgsql -c 'crontab -l'
0 0 * * * /usr/local/pgsql/vacuum.sh

Storage

I assume that the primary storage would be mounted in the /bareos directory from one NFS server, while the Disaster Recovery copy would be mounted as /bareos-dr from another NFS server. Below is an example NFS configuration of these mount points.

root@replica:~ # mkdir /bareos /bareos-dr

root@replica:~ # mount -t nfs
nfs-pri.backup.org:/export/bareos on /bareos (nfs, noatime)
nfs-sec.backup.org:/export/bareos-dr on /bareos-dr (nfs, noatime)

root@replica:~ # cat >> /etc/fstab << __EOF
#DEV                                  #MNT        #FS  #OPTS                                                         #DP
nfs-pri.backup.org:/export/bareos     /bareos     nfs  rw,noatime,rsize=1048576,wsize=1048576,readahead=4,soft,intr  0 0
nfs-sec.backup.org:/export/bareos-dr  /bareos-dr  nfs  rw,noatime,rsize=1048576,wsize=1048576,readahead=4,soft,intr  0 0
__EOF

root@replica:~ # mkdir -p /bareos/bootstrap
root@replica:~ # mkdir -p /bareos/restore
root@replica:~ # mkdir -p /bareos/storage/FileStorage

root@replica:~ # mkdir -p /bareos-dr/bootstrap
root@replica:~ # mkdir -p /bareos-dr/restore
root@replica:~ # mkdir -p /bareos-dr/storage/FileStorage

root@replica:~ # chown -R bareos:bareos /bareos /bareos-dr

root@replica:~ # find /bareos /bareos-dr -ls | column -t
69194  1  drwxr-xr-x  5  bareos  bareos  5  Apr  27  00:42  /bareos
72239  1  drwxr-xr-x  2  bareos  bareos  2  Apr  27  00:42  /bareos/restore
72240  1  drwxr-xr-x  3  bareos  bareos  3  Apr  27  00:42  /bareos/storage
72241  1  drwxr-xr-x  2  bareos  bareos  2  Apr  27  00:42  /bareos/storage/FileStorage
72238  1  drwxr-xr-x  2  bareos  bareos  2  Apr  27  00:42  /bareos/bootstrap
69195  1  drwxr-xr-x  5  bareos  bareos  5  Apr  27  00:43  /bareos-dr
72254  1  drwxr-xr-x  3  bareos  bareos  3  Apr  27  00:43  /bareos-dr/storage
72255  1  drwxr-xr-x  2  bareos  bareos  2  Apr  27  00:43  /bareos-dr/storage/FileStorage
72253  1  drwxr-xr-x  2  bareos  bareos  2  Apr  27  00:42  /bareos-dr/restore
72252  1  drwxr-xr-x  2  bareos  bareos  2  Apr  27  00:42  /bareos-dr/bootstrap

Bareos

As we have already used BAREOS-DATABASE-PASSWORD for the bareos user in PostgreSQL’s Bareos database, we will use the following passwords for the remaining parts of the Bareos subsystems. I think these passwords are self explanatory as to which Bareos components they belong to 🙂 – see the note after this list on generating proper ones.

  • BAREOS-DATABASE-PASSWORD
  • BAREOS-DIR-PASSWORD
  • BAREOS-SD-PASSWORD
  • BAREOS-FD-PASSWORD
  • BAREOS-MON-PASSWORD
  • ADMIN-PASSWORD
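
Needless to say, in a real deployment each of these placeholders should be replaced with a unique random password. One quick way to generate such passwords – using openssl(1) from the FreeBSD base system – is shown below.

root@replica:~ # openssl rand -base64 24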

We will now configure all these Bareos subsystems.

We have already modified the MyCatalog.conf file; here are its contents.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/catalog/MyCatalog.conf
Catalog {
  Name = MyCatalog
  dbdriver = "postgresql"
  dbname = "bareos"
  dbuser = "bareos"
  dbpassword = "BAREOS-DATABASE-PASSWORD"
}

Contents of the /usr/local/etc/bareos/bconsole.d/bconsole.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bconsole.d/bconsole.conf
#
# Bareos User Agent (or Console) Configuration File
#

Director {
  Name = replica.backup.org
  address = localhost
  Password = "BAREOS-DIR-PASSWORD"
  Description = "Bareos Console credentials for local Director"
}

Contents of the /usr/local/etc/bareos/bareos-dir.d/director/bareos-dir.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/director/bareos-dir.conf
Director {
  Name = replica.backup.org
  QueryFile = "/usr/local/lib/bareos/scripts/query.sql"
  Maximum Concurrent Jobs = 100
  Password = "BAREOS-DIR-PASSWORD"
  Messages = Daemon
  Auditing = yes

  # Enable the Heartbeat if you experience connection losses
  # (eg. because of your router or firewall configuration).
  # Additionally the Heartbeat can be enabled in bareos-sd and bareos-fd.
  #
  # Heartbeat Interval = 1 min

  # remove comment in next line to load dynamic backends from specified directory
  # Backend Directory = /usr/local/lib

  # remove comment from "Plugin Directory" to load plugins from specified directory.
  # if "Plugin Names" is defined, only the specified plugins will be loaded,
  # otherwise all director plugins (*-dir.so) from the "Plugin Directory".
  #
  # Plugin Directory = /usr/local/lib/bareos/plugins
  # Plugin Names = ""
}

Contents of the /usr/local/etc/bareos/bareos-dir.d/job/RestoreFiles.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/job/RestoreFiles.conf
Job {
  Name = "RestoreFiles"
  Description = "Standard Restore."
  Type = Restore
  Client = Default
  FileSet = "SelfTest"
  Storage = File
  Pool = BR-MO
  Messages = Standard
  Where = /bareos/restore
  Accurate = yes
}

New /usr/local/etc/bareos/bareos-dir.d/client/Default.conf file.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/client/Default.conf
Client {
  Name = Default
  address = replica.backup.org
  Password = "BAREOS-FD-PASSWORD"
}

New /usr/local/etc/bareos/bareos-dir.d/client/replica.backup.org.conf file.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/client/replica.backup.org.conf
Client {
  Name = replica.backup.org
  Description = "Client resource of the Director itself."
  address = replica.backup.org
  Password = "BAREOS-FD-PASSWORD"
}

The file below is left unchanged.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/job/BackupCatalog.conf
Job {
  Name = "BackupCatalog"
  Description = "Backup the catalog database (after the nightly save)"
  JobDefs = "DefaultJob"
  Level = Full
  FileSet="Catalog"
  Schedule = "WeeklyCycleAfterBackup"

  # This creates an ASCII copy of the catalog
  # Arguments to make_catalog_backup.pl are:
  #  make_catalog_backup.pl 
  RunBeforeJob = "/usr/local/lib/bareos/scripts/make_catalog_backup.pl MyCatalog"

  # This deletes the copy of the catalog
  RunAfterJob  = "/usr/local/lib/bareos/scripts/delete_catalog_backup"

  # This sends the bootstrap via mail for disaster recovery.
  # Should be sent to another system, please change recipient accordingly
  Write Bootstrap = "|/usr/local/bin/bsmtp -h localhost -f \"\(Bareos\) \" -s \"Bootstrap for Job %j\" root@localhost" # (#01)
  Priority = 11                   # run after main backup
}

The file below is left unchanged.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/messages/Standard.conf
Messages {
  Name = Standard
  Description = "Reasonable message delivery -- send most everything to email address and to the console."
  operatorcommand = "/usr/local/bin/bsmtp -h localhost -f \"\(Bareos\) \\" -s \"Bareos: Intervention needed for %j\" %r"
  mailcommand = "/usr/local/bin/bsmtp -h localhost -f \"\(Bareos\) \\" -s \"Bareos: %t %e of %c %l\" %r"
  operator = root@localhost = mount                                 # (#03)
  mail = root@localhost = all, !skipped, !saved, !audit             # (#02)
  console = all, !skipped, !saved, !audit
  append = "/var/log/bareos/bareos.log" = all, !skipped, !saved, !audit
  catalog = all, !skipped, !saved, !audit
}

The file below is left unchanged.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/messages/Daemon.conf
Messages {
  Name = Daemon
  Description = "Message delivery for daemon messages (no job)."
  mailcommand = "/usr/local/bin/bsmtp -h localhost -f \"\(Bareos\) \\" -s \"Bareos daemon message\" %r"
  mail = root@localhost = all, !skipped, !audit # (#02)
  console = all, !skipped, !saved, !audit
  append = "/var/log/bareos/bareos.log" = all, !skipped, !audit
  append = "/var/log/bareos/bareos-audit.log" = audit
}

Pools

By default Bareos comes with four pools configured; we will not use them, so we will delete their configuration files.

root@replica:~ # ls -l /usr/local/etc/bareos/bareos-dir.d/pool
total 14
-rw-rw----  1 bareos  bareos  536 Apr 16 08:14 Differential.conf
-rw-rw----  1 bareos  bareos  512 Apr 16 08:14 Full.conf
-rw-rw----  1 bareos  bareos  534 Apr 16 08:14 Incremental.conf
-rw-rw----  1 bareos  bareos   48 Apr 16 08:14 Scratch.conf

root@replica:~ # rm -f /usr/local/etc/bareos/bareos-dir.d/pool/*.conf

We will now create our own two pools: one for the DAILY backups and one for the MONTHLY backups.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/pool/BRONZE-DAILY-POOL.conf
Pool {
  Name = BR-DA
  Pool Type = Backup
  Recycle = yes                       # Bareos can automatically recycle Volumes
  AutoPrune = yes                     # Prune expired volumes
  Volume Retention = 7 days           # How long should the Full Backups be kept? (#06)
  Maximum Volume Bytes = 2G           # Limit Volume size to something reasonable
  Maximum Volumes = 100000            # Limit number of Volumes in Pool
  Label Format = "BR-DA-"             # Volumes will be labeled "BR-DA-"
}

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/pool/BRONZE-MONTHLY-POOL.conf
Pool {
  Name = BR-MO
  Pool Type = Backup
  Recycle = yes                       # Bareos can automatically recycle Volumes
  AutoPrune = yes                     # Prune expired volumes
  Volume Retention = 120 days         # How long should the Full Backups be kept? (#06)
  Maximum Volume Bytes = 2G           # Limit Volume size to something reasonable
  Maximum Volumes = 100000            # Limit number of Volumes in Pool
  Label Format = "BR-MO-"             # Volumes will be labeled "BR-MO-"
}

The file below is left unchanged.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/schedule/WeeklyCycle.conf
Schedule {
  Name = "WeeklyCycle"
  Run = Full 1st sat at 21:00                   # (#04)
  Run = Differential 2nd-5th sat at 21:00       # (#07)
  Run = Incremental mon-fri at 21:00            # (#10)
}

Contents of the /usr/local/etc/bareos/bareos-dir.d/jobdefs/DefaultJob.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/jobdefs/DefaultJob.conf
JobDefs {
  Name = "DefaultJob"
  Type = Backup
  Level = Differential
  Client = Default
  FileSet = "SelfTest"
  Schedule = "WeeklyCycle"
  Storage = File
  Messages = Standard
  Pool = BR-DA
  Priority = 10
  Write Bootstrap = "/bareos/bootstrap/%c.bsr"
}

Contents of the /usr/local/etc/bareos/bareos-dir.d/storage/File.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/storage/File.conf
Storage {
  Name = File
  Address = replica.backup.org
  Password = "BAREOS-SD-PASSWORD"
  Device = FileStorage
  Media Type = File
}

Contents of the /usr/local/etc/bareos/bareos-dir.d/console/bareos-mon.conf file after modifications.

root@replica: # cat /usr/local/etc/bareos/bareos-dir.d/console/bareos-mon.conf
Console {
  Name = bareos-mon
  Description = "Restricted console used by tray-monitor to get the status of the director."
  Password = "BAREOS-MON-PASSWORD"
  CommandACL = status, .status
  JobACL = *all*
}

Contents of the /usr/local/etc/bareos/bareos-dir.d/fileset/Catalog.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/fileset/Catalog.conf
FileSet {
  Name = "Catalog"
  Description = "Backup the catalog dump and Bareos configuration files."
  Include {
    Options {
      signature = MD5
      Compression = lzo
    }
    File = "/var/db/bareos"
    File = "/usr/local/etc/bareos"
  }
}

Contents of the /usr/local/etc/bareos/bareos-dir.d/fileset/SelfTest.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/fileset/SelfTest.conf
FileSet {
  Name = "SelfTest"
  Description = "fileset just to backup some files for selftest"
  Include {
    Options {
      Signature   = MD5
      Compression = lzo
    }
    File = "/usr/local/sbin"
  }
}

We do not need the bundled LinuxAll.conf and WindowsAllDrives.conf filesets so we will delete them.

root@replica:~ # ls -l /usr/local/etc/bareos/bareos-dir.d/fileset/
total 18
-rw-rw----  1 bareos  bareos  250 Apr 27 02:25 Catalog.conf
-rw-rw----  1 bareos  bareos  765 Apr 16 08:14 LinuxAll.conf
-rw-rw----  1 bareos  bareos  210 Apr 27 02:27 SelfTest.conf
-rw-rw----  1 bareos  bareos  362 Apr 16 08:14 WindowsAllDrives.conf

root@replica:~ # rm -f /usr/local/etc/bareos/bareos-dir.d/fileset/LinuxAll.conf

root@replica:~ # rm -f /usr/local/etc/bareos/bareos-dir.d/fileset/WindowsAllDrives.conf

We will now define two new filesets in the Windows.conf and UNIX.conf files.

New /usr/local/etc/bareos/bareos-dir.d/fileset/Windows.conf file.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/fileset/Windows.conf
FileSet {
  Name = Windows
  Enable VSS = yes
  Include {
    Options {
      Signature = MD5
      Drive Type = fixed
      IgnoreCase = yes
      WildFile = "[A-Z]:/pagefile.sys"
      WildDir  = "[A-Z]:/RECYCLER"
      WildDir  = "[A-Z]:/$RECYCLE.BIN"
      WildDir  = "[A-Z]:/System Volume Information"
      Exclude = yes
      Compression = lzo
    }
    File = /
  }
}

New /usr/local/etc/bareos/bareos-dir.d/fileset/UNIX.conf file.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/fileset/UNIX.conf
FileSet {
  Name = "UNIX"
  Include {
    Options {
      Signature = MD5 # calculate md5 checksum per file
      One FS = No     # descend into other filesystems
      FS Type = ufs
      FS Type = btrfs
      FS Type = ext2  # filesystems of given types will be backed up
      FS Type = ext3  # others will be ignored
      FS Type = ext4
      FS Type = reiserfs
      FS Type = jfs
      FS Type = xfs
      FS Type = zfs
      noatime = yes
      Compression = lzo
    }
    File = /
  }
  # Things that usually have to be excluded
  # You have to exclude /tmp
  # on your bareos server
  Exclude {
    File = /var/db/bareos
    File = /tmp
    File = /proc
    File = /sys
    File = /var/tmp
    File = /.journal
    File = /.fsck
  }
}
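
A handy way to check what such a FileSet actually matches is the bconsole(8) estimate command. A sketch, assuming the Director is already running and using the BRONZE-DAILY-01-2100-replica.backup.org Job that we will create later in this guide.

root@replica:~ # echo "estimate job=BRONZE-DAILY-01-2100-replica.backup.org listing" | bconsole # Job defined later in this guide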

The file below is left unchanged.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/profile/operator.conf
Profile {
   Name = operator
   Description = "Profile allowing normal Bareos operations."

   Command ACL = !.bvfs_clear_cache, !.exit, !.sql
   Command ACL = !configure, !create, !delete, !purge, !sqlquery, !umount, !unmount
   Command ACL = *all*

   Catalog ACL = *all*
   Client ACL = *all*
   FileSet ACL = *all*
   Job ACL = *all*
   Plugin Options ACL = *all*
   Pool ACL = *all*
   Schedule ACL = *all*
   Storage ACL = *all*
   Where ACL = *all*
}

Contents of the /usr/local/etc/bareos/bareos-sd.d/messages/Standard.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-sd.d/messages/Standard.conf
Messages {
  Name = Standard
  Director = replica.backup.org = all
  Description = "Send all messages to the Director."
}

We will add the /bareos/storage/FileStorage path as our FileStorage place for backups.

Contents of the /usr/local/etc/bareos/bareos-sd.d/device/FileStorage.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-sd.d/device/FileStorage.conf
Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /bareos/storage/FileStorage
  LabelMedia = yes;                   # lets Bareos label unlabeled media
  Random Access = yes;
  AutomaticMount = yes;               # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Description = "File device. A connecting Director must have the same Name and MediaType."
}

Contents of the /usr/local/etc/bareos/bareos-sd.d/storage/bareos-sd.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-sd.d/storage/bareos-sd.conf
Storage {
  Name = replica.backup.org
  Maximum Concurrent Jobs = 20

  # remove comment from "Plugin Directory" to load plugins from specified directory.
  # if "Plugin Names" is defined, only the specified plugins will be loaded,
  # otherwise all storage plugins (*-sd.so) from the "Plugin Directory".
  #
  # Plugin Directory = /usr/local/lib/bareos/plugins
  # Plugin Names = ""
}

Contents of the /usr/local/etc/bareos/bareos-sd.d/director/bareos-mon.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-sd.d/director/bareos-mon.conf
Director {
  Name = bareos-mon
  Password = "BAREOS-SD-PASSWORD"
  Monitor = yes
  Description = "Restricted Director, used by tray-monitor to get the status of this storage daemon."
}

Contents of the /usr/local/etc/bareos/bareos-sd.d/director/bareos-dir.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-sd.d/director/bareos-dir.conf
Director {
  Name = replica.backup.org
  Password = "BAREOS-SD-PASSWORD"
  Description = "Director, who is permitted to contact this storage daemon."
}

Contents of the /usr/local/etc/bareos/bareos-fd.d/messages/Standard.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-fd.d/messages/Standard.conf
Messages {
  Name = Standard
  Director = replica.backup.org = all, !skipped, !restored
  Description = "Send relevant messages to the Director."
}

Contents of the /usr/local/etc/bareos/bareos-fd.d/director/bareos-dir.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-fd.d/director/bareos-dir.conf
Director {
  Name = replica.backup.org
  Password = "BAREOS-FD-PASSWORD"
  Description = "Allow the configured Director to access this file daemon."
}

Contents of the /usr/local/etc/bareos/bareos-fd.d/director/bareos-mon.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-fd.d/director/bareos-mon.conf
Director {
  Name = bareos-mon
  Password = "BAREOS-MON-PASSWORD"
  Monitor = yes
  Description = "Restricted Director, used by tray-monitor to get the status of this file daemon."
}

Contents of the /usr/local/etc/bareos/bareos-fd.d/client/myself.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-fd.d/client/myself.conf
Client {
  Name = replica.backup.org
  Maximum Concurrent Jobs = 20

  # remove comment from "Plugin Directory" to load plugins from specified directory.
  # if "Plugin Names" is defined, only the specified plugins will be loaded,
  # otherwise all storage plugins (*-fd.so) from the "Plugin Directory".
  #
  # Plugin Directory = /usr/local/lib/bareos/plugins
  # Plugin Names = ""

  # if compatible is set to yes, we are compatible with bacula
  # if set to no, new bareos features are enabled which is the default
  # compatible = yes
}

Contents of the /usr/local/etc/bareos/bareos-dir.d/client/bareos-fd.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/client/bareos-fd.conf
Client {
  Name = bareos-fd
  Description = "Client resource of the Director itself."
  Address = localhost
  Password = "BAREOS-FD-PASSWORD"
}

Let's see which files and Bareos components hold which passwords.

root@replica:~ # cd /usr/local/etc/bareos

root@replica:/usr/local/etc/bareos # pwd
/usr/local/etc/bareos

root@replica:/usr/local/etc/bareos # grep -r Password . | sort -k 4 | column -t
./bareos-dir.d/director/bareos-dir.conf:        Password  =  "BAREOS-DIR-PASSWORD"
./bconsole.d/bconsole.conf:                     Password  =  "BAREOS-DIR-PASSWORD"
./bareos-dir.d/client/Default.conf:             Password  =  "BAREOS-FD-PASSWORD"
./bareos-dir.d/client/bareos-fd.conf:           Password  =  "BAREOS-FD-PASSWORD"
./bareos-dir.d/client/replica.backup.org.conf:  Password  =  "BAREOS-FD-PASSWORD"
./bareos-fd.d/director/bareos-dir.conf:         Password  =  "BAREOS-FD-PASSWORD"
./bareos-dir.d/console/bareos-mon.conf:         Password  =  "BAREOS-MON-PASSWORD"
./bareos-fd.d/director/bareos-mon.conf:         Password  =  "BAREOS-MON-PASSWORD"
./bareos-dir.d/storage/File.conf:               Password  =  "BAREOS-SD-PASSWORD"
./bareos-sd.d/director/bareos-dir.conf:         Password  =  "BAREOS-SD-PASSWORD"
./bareos-sd.d/director/bareos-mon.conf:         Password  =  "BAREOS-SD-PASSWORD"
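
Remember to replace these placeholders with real random secrets before going to production. Below is a minimal sketch that generates one secret per placeholder with openssl(1) and substitutes it across the whole tree, so each password still matches between the daemons that share it.

root@replica:/usr/local/etc/bareos # for P in BAREOS-DIR-PASSWORD BAREOS-SD-PASSWORD BAREOS-FD-PASSWORD BAREOS-MON-PASSWORD
do
  S=$( openssl rand -hex 16 )                            # one random secret per placeholder
  grep -rl "${P}" . | xargs sed -i '' "s/${P}/${S}/g"    # swap it in every file that uses it
done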

Let's fix the permissions after creating all the new files.

root@replica:~ # chown -R bareos:bareos /usr/local/etc/bareos
root@replica:~ # find /usr/local/etc/bareos -type f -exec chmod 640 {} ';'
root@replica:~ # find /usr/local/etc/bareos -type d -exec chmod 750 {} ';'

Bareos WebUI

Now we will add/configure files for the Bareos WebUI interface.

The main Nginx webserver configuration file.

root@replica:~ # cat /usr/local/etc/nginx/nginx.conf
user                 www;
worker_processes     4;
worker_rlimit_nofile 51200;
error_log            /var/log/nginx/error.log;

events {
  worker_connections 1024;
}

http {
  include           mime.types;
  default_type      application/octet-stream;
  log_format        main '$remote_addr - $remote_user [$time_local] "$request" ';
  access_log        /var/log/nginx/access.log main;
  sendfile          on;
  keepalive_timeout 65;

  server {
    listen       9100;
    server_name  replica.backup.org bareos;
    root         /usr/local/www/bareos-webui/public;

    location / {
      index index.php;
      try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
      fastcgi_pass 127.0.0.1:9000;
      fastcgi_param APPLICATION_ENV production;
      fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
      include fastcgi_params;
      try_files $uri =404;
    }
  }
}

For PHP we will copy the bundled /usr/local/etc/php.ini-production config file from the package and modify it.

root@replica:~ # cp /usr/local/etc/php.ini-production /usr/local/etc/php.ini

root@replica:~ # vi /usr/local/etc/php.ini

We only add the timezone; for my location it is Europe/Warsaw.

root@replica:~ # diff -u php.ini-production php.ini
--- php.ini-production  2017-08-12 03:23:36.000000000 +0200
+++ php.ini     2017-09-12 18:50:40.513138000 +0200
@@ -934,6 +934,7 @@
 ; Defines the default timezone used by the date functions
 ; http://php.net/date.timezone
 ;date.timezone =
+date.timezone = Europe/Warsaw

 ; http://php.net/date.default-latitude
 ;date.default_latitude = 31.7667

Here is the PHP php-fpm daemon configuration.

root@replica:~ # cat /usr/local/etc/php-fpm.conf
[global]
pid = run/php-fpm.pid
log_level = notice

[www]
user = www
group = www
listen = 127.0.0.1:9000
listen.backlog = -1
listen.owner = www
listen.group = www
listen.mode = 0660
listen.allowed_clients = 127.0.0.1
pm = static
pm.max_children = 4
pm.start_servers = 1
pm.min_spare_servers = 0
pm.max_spare_servers = 4
pm.process_idle_timeout = 1000s;
pm.max_requests = 500
request_terminate_timeout = 0
rlimit_files = 51200
env[HOSTNAME] = $HOSTNAME
env[PATH] = /usr/local/bin:/usr/bin:/bin
env[TMP] = /tmp
env[TMPDIR] = /tmp
env[TEMP] = /tmp

The rest of the Bareos WebUI configuration.

New /usr/local/etc/bareos/bareos-dir.d/console/admin.conf file.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/console/admin.conf
Console {
  Name = admin
  Password = ADMIN-PASSWORD
  Profile = webui-admin
}

New /usr/local/etc/bareos/bareos-dir.d/profile/webui-admin.conf file.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/profile/webui-admin.conf
Profile {
  Name = webui-admin
  CommandACL = !.bvfs_clear_cache, !.exit, !.sql, !configure, !create, !delete, !purge, !sqlquery, !umount, !unmount, *all*
  Job ACL = *all*
  Schedule ACL = *all*
  Catalog ACL = *all*
  Pool ACL = *all*
  Storage ACL = *all*
  Client ACL = *all*
  FileSet ACL = *all*
  Where ACL = *all*
  Plugin Options ACL = *all*
}

You may add other directors here as well.

Modified /usr/local/etc/bareos-webui/directors.ini file.

root@replica:~ # cat /usr/local/etc/bareos-webui/directors.ini
;------------------------------------------------------------------------------
; Section replica.backup.org
;------------------------------------------------------------------------------
[replica.backup.org]
enabled = "yes"
diraddress = "replica.backup.org"
dirport = 9101
catalog = "MyCatalog"

Modified /usr/local/etc/bareos-webui/configuration.ini file.

root@replica:~ # cat /usr/local/etc/bareos-webui/configuration.ini
;------------------------------------------------------------------------------
; SESSION SETTINGS
;------------------------------------------------------------------------------
[session]
timeout=3600

;------------------------------------------------------------------------------
; DASHBOARD SETTINGS
;------------------------------------------------------------------------------
[dashboard]
autorefresh_interval=60000

;------------------------------------------------------------------------------
; TABLE SETTINGS
;------------------------------------------------------------------------------
[tables]
pagination_values=10,25,50,100
pagination_default_value=25
save_previous_state=false

;------------------------------------------------------------------------------
; VARIOUS SETTINGS
;------------------------------------------------------------------------------
[autochanger]
labelpooltype=scratch

Last but not least, we need to set permissions for Bareos WebUI configuration files.

root@replica:~ # chown -R www:www /usr/local/etc/bareos-webui
root@replica:~ # chown -R www:www /usr/local/www/bareos-webui

Logs

Let's create the needed log files and fix their permissions.

root@replica:~ # chown -R bareos:bareos /var/log/bareos
root@replica:~ # :>               /var/log/php-fpm.log
root@replica:~ # chown -R www:www /var/log/php-fpm.log
root@replica:~ # chown -R www:www /var/log/nginx

We will now add rules for the newsyslog(8) log rotation subsystem; we do not want our filesystem to fill up, do we?

As newsyslog(8) already includes the *.conf.d directories, we will use them instead of modifying the main /etc/newsyslog.conf configuration file.

root@replica:~ # grep conf\\.d /etc/newsyslog.conf
<include> /etc/newsyslog.conf.d/*
<include> /usr/local/etc/newsyslog.conf.d/*

root@replica:~ # mkdir -p /usr/local/etc/newsyslog.conf.d

root@replica:~ # cat > /usr/local/etc/newsyslog.conf.d/bareos << __EOF
# BAREOS
/var/log/php-fpm.log             www:www       640  7     100    @T00  J
/var/log/nginx/access.log        www:www       640  7     100    @T00  J
/var/log/nginx/error.log         www:www       640  7     100    @T00  J
/var/log/bareos/bareos.log       bareos:bareos 640  7     100    @T00  J
/var/log/bareos/bareos-audit.log bareos:bareos 640  7     100    @T00  J
__EOF

Let's verify that newsyslog(8) understands our configuration.

root@replica:~ # newsyslog -v | tail -5
/var/log/php-fpm.log : --> will trim at Tue May  1 00:00:00 2018
/var/log/nginx/access.log : --> will trim at Tue May  1 00:00:00 2018
/var/log/nginx/error.log : --> will trim at Tue May  1 00:00:00 2018
/var/log/bareos/bareos.log : --> will trim at Tue May  1 00:00:00 2018
/var/log/bareos/bareos-audit.log : --> will trim at Tue May  1 00:00:00 2018
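
If we do not want to wait for midnight to see it work, newsyslog(8) can be forced to rotate a given log right away.

root@replica:~ # newsyslog -F -v /var/log/bareos/bareos.log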

Skel

We now need to create the so-called Bareos skel files for the rc(8) scripts to gather all the configuration in one file.

If we do not do that, the Bareos services will not start and we will see an error like the one below.

root@replica:~ # /usr/local/etc/rc.d/bareos-sd onestart
Starting bareos_sd.
27-Apr 02:59 bareos-sd JobId 0: Error: parse_conf.c:580 Failed to read config file "/usr/local/etc/bareos/bareos-sd.conf"
bareos-sd ERROR TERMINATION
parse_conf.c:148 Failed to find config filename.
/usr/local/etc/rc.d/bareos-sd: WARNING: failed to start bareos_sd

Let's create them then …

root@replica:~ # cat > /usr/local/etc/bareos/bareos-dir.conf << __EOF
@/usr/local/etc/bareos/bareos-dir.d/*/*
__EOF

root@replica:~ # cat > /usr/local/etc/bareos/bareos-fd.conf << __EOF
@/usr/local/etc/bareos/bareos-fd.d/*/*
__EOF

root@replica:~ # cat > /usr/local/etc/bareos/bareos-sd.conf << __EOF
@/usr/local/etc/bareos/bareos-sd.d/*/*
__EOF

root@replica:~ # cat > /usr/local/etc/bareos/bconsole.conf << __EOF
@/usr/local/etc/bareos/bconsole.d/*
__EOF

… and verify their contents.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.conf
@/usr/local/etc/bareos/bareos-dir.d/*/*

root@replica:~ # cat /usr/local/etc/bareos/bareos-fd.conf
@/usr/local/etc/bareos/bareos-fd.d/*/*

root@replica:~ # cat /usr/local/etc/bareos/bareos-sd.conf
@/usr/local/etc/bareos/bareos-sd.d/*/*

root@replica:~ # cat /usr/local/etc/bareos/bconsole.conf
@/usr/local/etc/bareos/bconsole.d/*

After all our modifications and added files, let's make sure that the /usr/local/etc/bareos directory permissions are properly set.

root@replica:~ # chown -R bareos:bareos /usr/local/etc/bareos
root@replica:~ # find /usr/local/etc/bareos -type f -exec chmod 640 {} ';'
root@replica:~ # find /usr/local/etc/bareos -type d -exec chmod 750 {} ';'
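
Before starting anything it is worth checking that the configuration actually parses; each Bareos daemon accepts the -t flag, which reads the configuration and exits. Running it as the bareos user also confirms that the permissions we just set are sufficient.

root@replica:~ # su -m bareos -c '/usr/local/sbin/bareos-dir -t' && echo OK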

It's Alive!

Back to our system settings, we will enable the services in the main FreeBSD /etc/rc.conf file.

After the modifications our final /etc/rc.conf file will look as follows.

root@replica:~ # cat /etc/rc.conf
# NETWORK
  hostname=replica.backup.org
  ifconfig_em0="inet 10.0.10.30/24 up"
  defaultrouter="10.0.10.1"

# DAEMONS
  zfs_enable=YES
  sshd_enable=YES
  nfs_client_enable=YES
  syslogd_flags="-ss"
  sendmail_enable=NONE

# OTHER
  clear_tmp_enable=YES
  dumpdev=NO

# BAREOS
  postgresql_enable=YES
  postgresql_class=pgsql
  bareos_dir_enable=YES
  bareos_sd_enable=YES
  bareos_fd_enable=YES
  php_fpm_enable=YES
  nginx_enable=YES

As the PostgreSQL server is already running …

root@replica:~ # /usr/local/etc/rc.d/postgresql status
pg_ctl: server is running (PID: 15205)
/usr/local/bin/postgres "-D" "/usr/local/pgsql/data"

… we will now start the rest of our Bareos stack services.

First the PHP php-fpm daemon.

root@replica:~ # /usr/local/etc/rc.d/php-fpm start
Performing sanity check on php-fpm configuration:
[27-Apr-2018 02:57:09] NOTICE: configuration file /usr/local/etc/php-fpm.conf test is successful

Starting php_fpm.

The Nginx webserver.

root@replica:~ # /usr/local/etc/rc.d/nginx start
Performing sanity check on nginx configuration:
nginx: the configuration file /usr/local/etc/nginx/nginx.conf syntax is ok
nginx: configuration file /usr/local/etc/nginx/nginx.conf test is successful
Starting nginx.

Bareos Storage Daemon.

root@replica:~ # /usr/local/etc/rc.d/bareos-sd start
Starting bareos_sd.

The Bareos File Daemon, also known as the Bareos client.

root@replica:~ # /usr/local/etc/rc.d/bareos-fd start
Starting bareos_fd.

… and last but not least, the most important daemon of this guide, the Bareos Director.

root@replica:~ # /usr/local/etc/rc.d/bareos-dir start
Starting bareos_dir.

We may now see on what ports our daemons are listening.

root@replica:~ # sockstat -l4
USER     COMMAND    PID   FD PROTO  LOCAL ADDRESS         FOREIGN ADDRESS      
bareos   bareos-dir 89823 4  tcp4   *:9101                *:*
root     bareos-fd  73066 3  tcp4   *:9102                *:*
www      nginx      33857 6  tcp4   *:9100                *:*
www      nginx      28675 6  tcp4   *:9100                *:*
www      nginx      20960 6  tcp4   *:9100                *:*
www      nginx      15881 6  tcp4   *:9100                *:*
root     nginx      14388 6  tcp4   *:9100                *:*
www      php-fpm    84047 0  tcp4   127.0.0.1:9000        *:*
www      php-fpm    82285 0  tcp4   127.0.0.1:9000        *:*
www      php-fpm    80688 0  tcp4   127.0.0.1:9000        *:*
www      php-fpm    74735 0  tcp4   127.0.0.1:9000        *:*
root     php-fpm    70518 8  tcp4   127.0.0.1:9000        *:*
bareos   bareos-sd  5151  3  tcp4   *:9103                *:*
pgsql    postgres   20009 4  tcp4   127.0.0.1:5432        *:*
root     sshd       49253 4  tcp4   *:22                  *:*

In case you wondered in what order these services start, below is the answer from the rc(8) subsystem.

root@replica:~ # rcorder /etc/rc.d/* /usr/local/etc/rc.d/* | grep -E '(bareos|php-fpm|nginx|postgresql)'
/usr/local/etc/rc.d/postgresql
/usr/local/etc/rc.d/php-fpm
/usr/local/etc/rc.d/nginx
/usr/local/etc/rc.d/bareos-sd
/usr/local/etc/rc.d/bareos-fd
/usr/local/etc/rc.d/bareos-dir

We can now access http://replica.backup.org:9100 in our browser.

bareos-webui-01

It's indeed alive; we can now log in with the admin user and ADMIN-PASSWORD password.

bareos-webui-02-dashboard

Once logged in we see an empty Bareos dashboard.

Jobs

Now, to make life easier I have prepared two scripts for adding clients to the Bareos server.

The BRONZE-job.sh and BRONZE-sched.sh scripts generate Bareos configuration files for new jobs and schedules. We will put them into the /root/bin dir for convenience.

root@replica:~ # mkdir /root/bin

Both scripts are available below:

After downloading them please rename them accordingly (WordPress limitation).

root@replica:~ # mv BRONZE-sched.sh.key BRONZE-sched.sh
root@replica:~ # mv BRONZE-job.sh.key   BRONZE-job.sh

Lets make them executable.

root@replica:~ # chmod +x /root/bin/BRONZE-sched.sh
root@replica:~ # chmod +x /root/bin/BRONZE-job.sh

Below is the ‘help’ message for each of them.

root@replica:~ # /root/bin/BRONZE-sched.sh 
usage: BRONZE-sched.sh GROUP TIME

example:
  BRONZE-sched.sh 01 21:00
root@replica:~ # /root/bin/BRONZE-job.sh
usage: BRONZE-job.sh GROUP TIME CLIENT TYPE

  GROUP option: 01 | 02 | 03
   TIME option: 00:00 - 23:59
 CLIENT option: FQDN
   TYPE option: UNIX | Windows

example:
  BRONZE-job.sh 01 21:00 CLIENT.domain.com UNIX

Client

For the first client we will use the replica.backup.org client – the server itself.

First use BRONZE-sched.sh to create a new schedule configuration. The script will echo the names of the files it created.

root@replica:~ # /root/bin/BRONZE-sched.sh 01 21:00
/usr/local/etc/bareos/bareos-dir.d/schedule/BRONZE-DAILY-01-2100-SCHED.conf
/usr/local/etc/bareos/bareos-dir.d/jobdefs/BRONZE-DAILY-01-2100-UNIX.conf
/usr/local/etc/bareos/bareos-dir.d/jobdefs/BRONZE-DAILY-01-2100-Windows.conf
/usr/local/etc/bareos/bareos-dir.d/schedule/BRONZE-MONTHLY-01-2100-SCHED.conf
/usr/local/etc/bareos/bareos-dir.d/jobdefs/BRONZE-MONTHLY-01-2100-UNIX.conf
/usr/local/etc/bareos/bareos-dir.d/jobdefs/BRONZE-MONTHLY-01-2100-Windows.conf

We will not use Windows backups for that client in that schedule so we can remove them.

root@replica:~ # rm -f \
  /usr/local/etc/bareos/bareos-dir.d/jobdefs/BRONZE-DAILY-01-2100-Windows.conf \
  /usr/local/etc/bareos/bareos-dir.d/jobdefs/BRONZE-MONTHLY-01-2100-Windows.conf

Then use BRONZE-job.sh to add the client and its type to the schedule created earlier. The names of the created files will also be echoed to stdout.

root@replica:~ # /root/bin/BRONZE-job.sh 01 21:00 replica.backup.org UNIX
INFO: client DNS check.
INFO: DNS 'A' RECORD: Host replica.backup.org not found: 3(NXDOMAIN)
INFO: DNS 'PTR' RECORD: Host 3\(NXDOMAIN\) not found: 3(NXDOMAIN)
/usr/local/etc/bareos/bareos-dir.d/job/BRONZE-DAILY-01-2100-replica.backup.org.conf
/usr/local/etc/bareos/bareos-dir.d/job/BRONZE-MONTHLY-01-2100-replica.backup.org.conf

Now we need to reload the Bareos server configuration.

root@replica:~ # echo reload | bconsole
Connecting to Director localhost:9101
1000 OK: replica.backup.org Version: 16.2.7 (09 October 2017)
Enter a period to cancel a command.
reload
reloaded
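
The same Job can also be started straight from the command line instead of the WebUI; a sketch, assuming the Job resource took its name from the file created above.

root@replica:~ # echo "run job=BRONZE-DAILY-01-2100-replica.backup.org yes" | bconsole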

Let's see how it looks in the browser. We will run that job, then cancel it, and then run it again.

bareos-webui-03-clients

Client replica.backup.org is configured.

Let's go to the Jobs tab to start its backup Job.

bareos-webui-04-jobs

A message that the backup Job has started.

bareos-webui-05

We can see it in the running state on the Jobs tab.

bareos-webui-06

… and on the Dashboard.

bareos-webui-07

We can also display its messages by clicking on its number.

bareos-webui-08

The Jobs tab after cancelling the first Job and starting it again till completion.

bareos-webui-09

… and the Dashboard after these activities.

bareos-webui-10-dashboard

Restore

Let's restore some data. In Bareos it's a breeze, as it's done directly in the browser on the Restore tab.

bareos-webui-11-restore

The Restore Job has started.

bareos-webui-12

The Dashboard after restoration.

bareos-webui-13-dashboard

… and Volumes with our precious data.

bareos-webui-14-volumes

Contents of a Volume.

bareos-webui-15-volumes-backups

Status of our Bareos Director.

bareos-webui-16

… and Director Messages, an equivalent of query actlog from IBM TSM or as they call it recently – IBM Spectrum Protect.

bareos-webui-17-messages

… and Bareos Console (bconsole) directly in the browser. Masterpiece!

bareos-webui-18-console

Confirmation of the restored file.

root@replica:~ # ls -l /tmp/bareos-restores/COPYRIGHT 
-r--r--r--  1 root  wheel  6199 Jul 21  2017 /tmp/bareos-restores/COPYRIGHT

root@replica:~ # sha256 /tmp/bareos-restores/COPYRIGHT /COPYRIGHT | column -t
SHA256  (/tmp/bareos-restores/COPYRIGHT)  =  79b7aaafa1bc42a1ff03f1f78a667edb9a203dbcadec06aabc875e25a83d23f0
SHA256  (/COPYRIGHT)                      =  79b7aaafa1bc42a1ff03f1f78a667edb9a203dbcadec06aabc875e25a83d23f0

Remote Replica

We have Volumes with backups in the /bareos directory; we will now configure rsync(1) to replicate these backups to the /bareos-dr directory, an NFS share from a server in another location.

root@replica:~ # pkg install rsync

The rsync(1) command will look like this.

/usr/local/bin/rsync -r -u -l -p -t -S --force --no-whole-file --numeric-ids --delete-after /bareos/ /bareos-dr/

We will put that command into the root user's crontab(1).

root@replica:~ # crontab -e

root@replica:~ # crontab -l
0 7 * * * /usr/local/bin/rsync -r -u -l -p -t -S --force --no-whole-file --numeric-ids --delete-after /bareos/ /bareos-dr/

As all backups finish before 7:00, the end of the backup window, we start the replication then.
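
If a replication run ever takes longer than a day, a second rsync(1) would start on top of the first one. To be safe we can serialize the runs with lockf(1), which ships with the FreeBSD base system; the /var/run/bareos-replica.lock path below is an arbitrary choice.

0 7 * * * /usr/bin/lockf -t 0 /var/run/bareos-replica.lock /usr/local/bin/rsync -r -u -l -p -t -S --force --no-whole-file --numeric-ids --delete-after /bareos/ /bareos-dr/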

Summary

We now have a fully configured Bareos Backup Server on the FreeBSD operating system, ready to make backups and restores. It can be used as an appliance on any virtualization platform, or on a physical server with local storage resources, without NFS shares.

UPDATE 1 – Die Hard Tribute in 9.2-RC3 Loader

The FreeBSD Developers even made a tribute to the Die Hard movie and actually implemented the Nakatomi Socrates screen in the FreeBSD 9.2-RC3 loader, as shown in the images below. Unfortunately it was removed in the later FreeBSD 9.2-RC4 and the official FreeBSD 9.2-RELEASE versions.

freebsd-9.2-nakatomi-socrates-01

freebsd-9.2-nakatomi-socrates-02

UPDATE 2

The Bareos Backup Server on FreeBSD article was featured in the BSD Now 254 – Bare the OS episode.

Thanks for the mention!

UPDATE 3 – Additional Permissions

Thanks to the user Math, who identified the problem, I added the paragraph below in the proper place to make the HOWTO complete. Without it many Bareos daemons would not start, failing with a permissions error.

Here is the added paragraph.

We also need to change permissions for the /var/run and /var/db directories for Bareos.

root@replica:~ # chown -R bareos:bareos /var/db/bareos
root@replica:~ # chown -R bareos:bareos /var/run/bareos

EOF

Distributed Object Storage with Minio on FreeBSD

Meet Minio.

minio-logo-arch-32

Free and open source distributed object storage server compatible with the Amazon S3 v2/v4 API. It offers data protection against hardware failures using erasure code and bitrot detection, and supports a highly available distributed setup. It provides confidentiality, integrity and authenticity assurances for encrypted data with negligible performance overhead; both server side and client side encryption are supported. Below is an image of an example Minio setup.

Web

Minio identifies itself as the ZFS of Cloud Object Storage. This guide will show you how to set up a highly available distributed Minio storage on the FreeBSD operating system, with ZFS as the backend for the Minio data. For convenience we will use FreeBSD Jails operating system level virtualization.

Setup

The setup assumes that you have 3 datacenters: two datacenters in which most of the data must reside, and a third datacenter used in a ‘quorum/witness’ role. Distributed Minio supports up to 16 nodes/drives total, so we may juggle that number to balance data between the desired datacenters. As we have 16 drives to allocate across 3 sites, we will use the 7 + 7 + 2 approach here. The datacenters where most of the data must reside get a 7/16 share each, while the ‘quorum/witness’ datacenter gets only a 2/16 share. Thanks to the built-in Minio redundancy we may lose (turn off, for example) any one of those machines and our object storage will still be available and ready to use for any purpose: with 16 drives Minio keeps 8 data and 8 parity shares by default, so even losing a whole 7-drive site leaves 9 of the 16 drives online, which is still enough to serve the data.

Jails

First we will create 3 Jails for our proof of concept Minio setup; storage1 will have the ‘quorum/witness’ role while storage2 and storage3 will have the ‘data’ role. To distinguish the commands typed on the host system from those typed in a storageX Jail, I use two different prompts; this way it should be obvious which command to execute and where.

Command on the host system.

host # command

Command on the storageX Jail.

root@storageX:/ # command

First we will create the base Jails for our setup.

host # mkdir -p /jail/BASE /jail/storage1 /jail/storage2 /jail/storage3
host # cd /jail/BASE
host # fetch http://ftp.freebsd.org/pub/FreeBSD/releases/amd64/11.1-RELEASE/base.txz
host # for I in 1 2 3; do echo ${I}; tar --unlink -xpJf /jail/BASE/base.txz -C /jail/storage${I}; done
1
2
3
host #

We will now add the Jails configuration to the /etc/jail.conf file.

I have used my laptop as the Jail host. This is why the Jails will be configured to use the wireless wlan0 interface and 192.168.43.10X addresses.

host # for I in 1 2 3
do
  cat >> /etc/jail.conf << __EOF
storage${I} {
  host.hostname = storage${I}.local;
  ip4.addr = 192.168.43.10${I};
  interface = wlan0;
  path = /jail/storage${I};
  exec.start = "/bin/sh /etc/rc";
  exec.stop = "/bin/sh /etc/rc.shutdown";
  exec.clean;
  mount.devfs;
  allow.raw_sockets;
}

__EOF
done
host #

Let's verify that the /etc/jail.conf file is configured as desired.

host # cat /etc/jail.conf
storage1 {
  host.hostname = storage1.local;
  ip4.addr = 192.168.43.101;
  interface = wlan0;
  path = /jail/storage1;
  exec.start = "/bin/sh /etc/rc";
  exec.stop = "/bin/sh /etc/rc.shutdown";
  exec.clean;
  mount.devfs;
  allow.raw_sockets;
}

storage2 {
  host.hostname = storage2.local;
  ip4.addr = 192.168.43.102;
  interface = wlan0;
  path = /jail/storage2;
  exec.start = "/bin/sh /etc/rc";
  exec.stop = "/bin/sh /etc/rc.shutdown";
  exec.clean;
  mount.devfs;
  allow.raw_sockets;
}

storage3 {
  host.hostname = storage3.local;
  ip4.addr = 192.168.43.103;
  interface = wlan0;
  path = /jail/storage3;
  exec.start = "/bin/sh /etc/rc";
  exec.stop = "/bin/sh /etc/rc.shutdown";
  exec.clean;
  mount.devfs;
  allow.raw_sockets;
}

host #

Now we will start our Jails.

host # for I in 1 2 3; do service jail onestart storage${I}; done
Starting jails: storage1.
Starting jails: storage2.
Starting jails: storage3.

Let's see how they work.

host # jls
   JID  IP Address      Hostname                      Path
     1  192.168.43.101  storage1.local                /jail/storage1
     2  192.168.43.102  storage2.local                /jail/storage2
     3  192.168.43.103  storage3.local                /jail/storage3

Now let's add a DNS server so they will have Internet connectivity.

host # for I in 1 2 3; do echo nameserver 1.1.1.1 > /jail/storage${I}/etc/resolv.conf; done
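
We can quickly check that name resolution and connectivity work from inside a Jail; the allow.raw_sockets option in our jail.conf lets us use ping(8) there, assuming the host routes traffic for the Jail addresses.

host # jexec storage1 ping -c 1 freebsd.org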

We can now install the Minio package.

host # for I in 1 2 3; do jexec storage${I} env ASSUME_ALWAYS_YES=yes pkg install -y minio; echo; done
Bootstrapping pkg from pkg+http://pkg.FreeBSD.org/FreeBSD:11:amd64/quarterly, please wait...
Verifying signature with trusted certificate pkg.freebsd.org.2013102301... done
[storage1.local] Installing pkg-1.10.5...
[storage1.local] Extracting pkg-1.10.5: 100%
Updating FreeBSD repository catalogue...
pkg: Repository FreeBSD load error: access repo file(/var/db/pkg/repo-FreeBSD.sqlite) failed: No such file or directory
[storage1.local] Fetching meta.txz: 100%    944 B   0.9kB/s    00:01    
[storage1.local] Fetching packagesite.txz: 100%    6 MiB 637.1kB/s    00:10    
Processing entries: 100%
FreeBSD repository update completed. 31143 packages processed.
All repositories are up to date.
Updating database digests format: 100%
The following 1 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
        minio: 2018.03.19.19.22.06

Number of packages to be installed: 1

The process will require 22 MiB more space.
6 MiB to be downloaded.
[storage1.local] [1/1] Fetching minio-2018.03.19.19.22.06.txz: 100%    6 MiB 305.6kB/s    00:19    
Checking integrity... done (0 conflicting)
[storage1.local] [1/1] Installing minio-2018.03.19.19.22.06...
===> Creating groups.
Creating group 'minio' with gid '473'.
===> Creating users
Creating user 'minio' with uid '473'.
[storage1.local] [1/1] Extracting minio-2018.03.19.19.22.06: 100%

Bootstrapping pkg from pkg+http://pkg.FreeBSD.org/FreeBSD:11:amd64/quarterly, please wait...
Verifying signature with trusted certificate pkg.freebsd.org.2013102301... done
[storage2.local] Installing pkg-1.10.5...
[storage2.local] Extracting pkg-1.10.5: 100%
Updating FreeBSD repository catalogue...
pkg: Repository FreeBSD load error: access repo file(/var/db/pkg/repo-FreeBSD.sqlite) failed: No such file or directory
[storage2.local] Fetching meta.txz: 100%    944 B   0.9kB/s    00:01    
[storage2.local] Fetching packagesite.txz: 100%    6 MiB 637.1kB/s    00:10    
Processing entries: 100%
FreeBSD repository update completed. 31143 packages processed.
All repositories are up to date.
Updating database digests format: 100%
The following 1 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
        minio: 2018.03.19.19.22.06

Number of packages to be installed: 1

The process will require 22 MiB more space.
6 MiB to be downloaded.
[storage2.local] [1/1] Fetching minio-2018.03.19.19.22.06.txz: 100%    6 MiB 305.6kB/s    00:19    
Checking integrity... done (0 conflicting)
[storage2.local] [1/1] Installing minio-2018.03.19.19.22.06...
===> Creating groups.
Creating group 'minio' with gid '473'.
===> Creating users
Creating user 'minio' with uid '473'.
[storage2.local] [1/1] Extracting minio-2018.03.19.19.22.06: 100%

Bootstrapping pkg from pkg+http://pkg.FreeBSD.org/FreeBSD:11:amd64/quarterly, please wait...
Verifying signature with trusted certificate pkg.freebsd.org.2013102301... done
[storage3.local] Installing pkg-1.10.5...
[storage3.local] Extracting pkg-1.10.5: 100%
Updating FreeBSD repository catalogue...
pkg: Repository FreeBSD load error: access repo file(/var/db/pkg/repo-FreeBSD.sqlite) failed: No such file or directory
[storage3.local] Fetching meta.txz: 100%    944 B   0.9kB/s    00:01    
[storage3.local] Fetching packagesite.txz: 100%    6 MiB 637.1kB/s    00:10    
Processing entries: 100%
FreeBSD repository update completed. 31143 packages processed.
All repositories are up to date.
Updating database digests format: 100%
The following 1 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
        minio: 2018.03.19.19.22.06

Number of packages to be installed: 1

The process will require 22 MiB more space.
6 MiB to be downloaded.
[storage3.local] [1/1] Fetching minio-2018.03.19.19.22.06.txz: 100%    6 MiB 305.6kB/s    00:19    
Checking integrity... done (0 conflicting)
[storage3.local] [1/1] Installing minio-2018.03.19.19.22.06...
===> Creating groups.
Creating group 'minio' with gid '473'.
===> Creating users
Creating user 'minio' with uid '473'.
[storage3.local] [1/1] Extracting minio-2018.03.19.19.22.06: 100%

host #

Let's verify that the Minio package installed successfully.

host # for I in 1 2 3; do jexec storage${I} which minio; done
/usr/local/bin/minio
/usr/local/bin/minio
/usr/local/bin/minio
host #

Now we will configure the /etc/hosts file in each Jail.

root@storage1:/ # cat >> /etc/hosts << __EOF
192.168.43.101 storage1
192.168.43.102 storage2
192.168.43.103 storage3
__EOF
root@storage2:/ # cat >> /etc/hosts << __EOF
192.168.43.101 storage1
192.168.43.102 storage2
192.168.43.103 storage3
__EOF
root@storage3:/ # cat >> /etc/hosts << __EOF
192.168.43.101 storage1
192.168.43.102 storage2
192.168.43.103 storage3
__EOF

We will create directories for Minio data.

host # for DIR in 1 2 3 4 5 6 7
do
  for I in 2 3
  do
    jexec storage${I} mkdir -p /data${DIR}
  done
done
host # for DIR in 1 2
do
  for I in 1
  do
    jexec storage${I} mkdir -p /data${DIR}
  done
done

Let's verify that our data directories were created successfully.

host # for I in 1 2 3
  do
    echo storage${I}
    jexec storage${I} ls -1 / | grep data
    echo
  done

storage1
data1
data2

storage2
data1
data2
data3
data4
data5
data6
data7

storage3
data1
data2
data3
data4
data5
data6
data7

A basic minio command example.

root@storage1:/ # minio
NAME:
  minio - Cloud Storage Server.

DESCRIPTION:
  Minio is an Amazon S3 compatible object storage server. Use it to store photos, videos, VMs, containers, log files, or any blob of data as objects.

USAGE:
  minio [FLAGS] COMMAND [ARGS...]

COMMANDS:
  server   Start object storage server.
  gateway  Start object storage gateway.
  update   Check for a new software update.
  version  Print version.
  
FLAGS:
  --config-dir value, -C value  Path to configuration directory. (default: "/root/.minio")
  --quiet                       Disable startup information.
  --json                        Output server logs and startup information in json format.
  --help, -h                    Show help.
  
VERSION:
  2018-03-19T19:22:06Z

Now we can generate the list of directories on the servers to add as arguments for Minio.

host # for DIR in 1 2
do
  for I in 1 
  do
    echo -n http://
    jls | grep storage${I} | awk '{printf $3}' | sed s/.local//g
    echo ":9000/data${DIR} \\"
  done
done | sort -n

host # for DIR in 1 2 3 4 5 6 7
do
  for I in 2 3
  do
    echo -n http://
    jls | grep storage${I} | awk '{printf $3}' | sed s/.local//g
    echo ":9000/data${DIR} \\"
  done
done | sort -n
http://storage1:9000/data1 \
http://storage1:9000/data2 \
http://storage2:9000/data1 \
http://storage2:9000/data2 \
http://storage2:9000/data3 \
http://storage2:9000/data4 \
http://storage2:9000/data5 \
http://storage2:9000/data6 \
http://storage2:9000/data7 \
http://storage3:9000/data1 \
http://storage3:9000/data2 \
http://storage3:9000/data3 \
http://storage3:9000/data4 \
http://storage3:9000/data5 \
http://storage3:9000/data6 \
http://storage3:9000/data7 \

We can just as well write it down by hand of course 🙂

host # for DIR in 1 2
do
  for I in 1 
  do
    echo -n http://
    jls | grep storage${I} | awk '{printf $3}' | sed s/.local//g
    echo -n ":9000/data${DIR} "
  done
done | sort -n

host # for DIR in 1 2 3 4 5 6 7
do
  for I in 2 3
  do
    echo -n http://
    jls | grep storage${I} | awk '{printf $3}' | sed s/.local//g
    echo -n ":9000/data${DIR} "
  done
done | sort -n

This is our list of data directories that we will use to configure Minio in FreeBSD's main configuration /etc/rc.conf file.

http://storage1:9000/data1 http://storage1:9000/data2 http://storage2:9000/data1 http://storage2:9000/data2 http://storage2:9000/data3 http://storage2:9000/data4 http://storage2:9000/data5 http://storage2:9000/data6 http://storage2:9000/data7 http://storage3:9000/data1 http://storage3:9000/data2 http://storage3:9000/data3 http://storage3:9000/data4 http://storage3:9000/data5 http://storage3:9000/data6 http://storage3:9000/data7

Now, let's put the Minio settings into the /etc/rc.conf file.

root@storageX:~ # cat > /etc/rc.conf << __EOF 
minio_enable=YES
minio_disks="http://storage1:9000/data1 http://storage1:9000/data2 http://storage2:9000/data1 http://storage2:9000/data2 http://storage2:9000/data3 http://storage2:9000/data4 http://storage2:9000/data5 http://storage2:9000/data6 http://storage2:9000/data7 http://storage3:9000/data1 http://storage3:9000/data2 http://storage3:9000/data3 http://storage3:9000/data4 http://storage3:9000/data5 http://storage3:9000/data6 http://storage3:9000/data7"
__EOF
root@storageX:~ # 
root@storageX:~ # cat /etc/rc.conf
minio_enable=YES
minio_disks="http://storage1:9000/data1 http://storage1:9000/data2 http://storage2:9000/data1 http://storage2:9000/data2 http://storage2:9000/data3 http://storage2:9000/data4 http://storage2:9000/data5 http://storage2:9000/data6 http://storage2:9000/data7 http://storage3:9000/data1 http://storage3:9000/data2 http://storage3:9000/data3 http://storage3:9000/data4 http://storage3:9000/data5 http://storage3:9000/data6 http://storage3:9000/data7"
root@storageX:~ #
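
Alternatively, instead of the heredoc above, the same settings can be applied with sysrc(8), which modifies /etc/rc.conf in place instead of overwriting it.

root@storageX:~ # sysrc minio_enable=YES
root@storageX:~ # sysrc minio_disks="http://storage1:9000/data1 http://storage1:9000/data2 http://storage2:9000/data1 http://storage2:9000/data2 http://storage2:9000/data3 http://storage2:9000/data4 http://storage2:9000/data5 http://storage2:9000/data6 http://storage2:9000/data7 http://storage3:9000/data1 http://storage3:9000/data2 http://storage3:9000/data3 http://storage3:9000/data4 http://storage3:9000/data5 http://storage3:9000/data6 http://storage3:9000/data7"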

Now we will start and configure Minio for the first time.

On each storageX server run the following set of commands.

host # jexec storage3
root@storage3:~ # 
root@storage3:/ # rm -rf /http:\*
root@storage3:/ # rm -rf /usr/local/etc/minio
root@storage3:/ # rm -rf /data?/* /data?/.minio.sys
root@storage3:/ # touch                /var/log/minio.log
root@storage3:/ # chown    minio:minio /var/log/minio.log
root@storage3:/ # mkdir -p             /usr/local/etc/minio
root@storage3:/ # chown -R minio:minio /usr/local/etc/minio
root@storage3:/ # mkdir -p             /http::
root@storage3:/ # chown -R minio:minio /http::
root@storage3:/ # mkdir -p             /http:
root@storage3:/ # chown -R minio:minio /http:
root@storage3:/ # su -m minio -c 'env \
?   MINIO_ACCESS_KEY=alibaba \
?   MINIO_SECRET_KEY=0P3NS3S4M3 \
?   minio server \
?     --config-dir /usr/local/etc/minio \
?     http://storage1:9000/data1 \
?     http://storage1:9000/data2 \
?     http://storage2:9000/data1 \
?     http://storage2:9000/data2 \
?     http://storage2:9000/data3 \
?     http://storage2:9000/data4 \
?     http://storage2:9000/data5 \
?     http://storage2:9000/data6 \
?     http://storage2:9000/data7 \
?     http://storage3:9000/data1 \
?     http://storage3:9000/data2 \
?     http://storage3:9000/data3 \
?     http://storage3:9000/data4 \
?     http://storage3:9000/data5 \
?     http://storage3:9000/data6 \
?     http://storage3:9000/data7'
Created minio configuration file successfully at /usr/local/etc/minio
Waiting for the first server to format the disks.
Waiting for the first server to format the disks.
Drive Capacity: 504 GiB Free, 515 GiB Total
Status:         16 Online, 0 Offline. 

Endpoint:  http://192.168.43.103:9000
AccessKey: alibaba 
SecretKey: 0P3NS3S4M3 

Browser Access:
   http://192.168.43.103:9000

Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
   $ mc config host add myminio http://192.168.43.103:9000 alibaba 0P3NS3S4M3

Object API (Amazon S3 compatible):
   Go:         https://docs.minio.io/docs/golang-client-quickstart-guide
   Java:       https://docs.minio.io/docs/java-client-quickstart-guide
   Python:     https://docs.minio.io/docs/python-client-quickstart-guide
   JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide
   .NET:       https://docs.minio.io/docs/dotnet-client-quickstart-guide
host # jexec storage2
root@storage2:~ # 
root@storage2:/ # rm -rf /http:\*
root@storage2:/ # rm -rf /usr/local/etc/minio
root@storage2:/ # rm -rf /data?/* /data?/.minio.sys
root@storage2:/ # touch                /var/log/minio.log
root@storage2:/ # chown    minio:minio /var/log/minio.log
root@storage2:/ # mkdir -p             /usr/local/etc/minio
root@storage2:/ # chown -R minio:minio /usr/local/etc/minio
root@storage2:/ # mkdir -p             /http::
root@storage2:/ # chown -R minio:minio /http::
root@storage2:/ # mkdir -p             /http:
root@storage2:/ # chown -R minio:minio /http:
root@storage2:/ # su -m minio -c 'env \
?   MINIO_ACCESS_KEY=alibaba \
?   MINIO_SECRET_KEY=0P3NS3S4M3 \
?   minio server \
?     --config-dir /usr/local/etc/minio \
?     http://storage1:9000/data1 \
?     http://storage1:9000/data2 \
?     http://storage2:9000/data1 \
?     http://storage2:9000/data2 \
?     http://storage2:9000/data3 \
?     http://storage2:9000/data4 \
?     http://storage2:9000/data5 \
?     http://storage2:9000/data6 \
?     http://storage2:9000/data7 \
?     http://storage3:9000/data1 \
?     http://storage3:9000/data2 \
?     http://storage3:9000/data3 \
?     http://storage3:9000/data4 \
?     http://storage3:9000/data5 \
?     http://storage3:9000/data6 \
?     http://storage3:9000/data7'
Created minio configuration file successfully at /usr/local/etc/minio
Waiting for the first server to format the disks.
Waiting for the first server to format the disks.
Drive Capacity: 504 GiB Free, 515 GiB Total
Status:         16 Online, 0 Offline. 

Endpoint:  http://192.168.43.102:9000
AccessKey: alibaba 
SecretKey: 0P3NS3S4M3 

Browser Access:
   http://192.168.43.102:9000

Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
   $ mc config host add myminio http://192.168.43.102:9000 alibaba 0P3NS3S4M3

Object API (Amazon S3 compatible):
   Go:         https://docs.minio.io/docs/golang-client-quickstart-guide
   Java:       https://docs.minio.io/docs/java-client-quickstart-guide
   Python:     https://docs.minio.io/docs/python-client-quickstart-guide
   JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide
   .NET:       https://docs.minio.io/docs/dotnet-client-quickstart-guide
host # jexec storage1
root@storage1:~ # 
root@storage1:/ # rm -rf /http:\*
root@storage1:/ # rm -rf /usr/local/etc/minio
root@storage1:/ # rm -rf /data?/* /data?/.minio.sys
root@storage1:/ # touch                /var/log/minio.log
root@storage1:/ # chown    minio:minio /var/log/minio.log
root@storage1:/ # mkdir -p             /usr/local/etc/minio
root@storage1:/ # chown -R minio:minio /usr/local/etc/minio
root@storage1:/ # mkdir -p             /http::
root@storage1:/ # chown -R minio:minio /http::
root@storage1:/ # mkdir -p             /http:
root@storage1:/ # chown -R minio:minio /http:
root@storage1:/ # su -m minio -c 'env \
?   MINIO_ACCESS_KEY=alibaba \
?   MINIO_SECRET_KEY=0P3NS3S4M3 \
?   minio server \
?     --config-dir /usr/local/etc/minio \
?     http://storage1:9000/data1 \
?     http://storage1:9000/data2 \
?     http://storage2:9000/data1 \
?     http://storage2:9000/data2 \
?     http://storage2:9000/data3 \
?     http://storage2:9000/data4 \
?     http://storage2:9000/data5 \
?     http://storage2:9000/data6 \
?     http://storage2:9000/data7 \
?     http://storage3:9000/data1 \
?     http://storage3:9000/data2 \
?     http://storage3:9000/data3 \
?     http://storage3:9000/data4 \
?     http://storage3:9000/data5 \
?     http://storage3:9000/data6 \
?     http://storage3:9000/data7'
Created minio configuration file successfully at /usr/local/etc/minio
Waiting for the first server to format the disks.
Waiting for the first server to format the disks.
Drive Capacity: 504 GiB Free, 515 GiB Total
Status:         16 Online, 0 Offline. 

Endpoint:  http://192.168.43.101:9000
AccessKey: alibaba 
SecretKey: 0P3NS3S4M3 

Browser Access:
   http://192.168.43.101:9000

Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
   $ mc config host add myminio http://192.168.43.101:9000 alibaba 0P3NS3S4M3

Object API (Amazon S3 compatible):
   Go:         https://docs.minio.io/docs/golang-client-quickstart-guide
   Java:       https://docs.minio.io/docs/java-client-quickstart-guide
   Python:     https://docs.minio.io/docs/python-client-quickstart-guide
   JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide
   .NET:       https://docs.minio.io/docs/dotnet-client-quickstart-guide

Here is how it looks in the xterm terminal.

minio-first-run-setup

We can now verify in the browser that it actually works.

minio-browser-01

Now hit [CTRL]+[C] in each of these windows to stop the Minio cluster.

We will now start Minio as a service under the FreeBSD rc(8) subsystem.

root@storage1:/ # service minio start
Starting minio.
root@storage1:/ # cat /var/log/minio.log 
root@storage1:/ # service minio status
minio is running as pid 50309.

Let's check if it works.

root@storage1:/ # ps -U minio
  PID TT  STAT    TIME COMMAND
50308  -  IsJ  0:00.00 daemon: /usr/bin/env[50309] (daemon)
50309  -  IJ   0:00.27 /usr/local/bin/minio -C /usr/local/etc/minio server (...)
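
Besides the browser we can also talk to the cluster with the Minio command line client mc, as suggested in the startup output above. A sketch, assuming the minio-client package; note that the FreeBSD port may install the binary under a different name than mc to avoid clashing with Midnight Commander.

host # pkg install -y minio-client
host # mc config host add myminio http://192.168.43.101:9000 alibaba 0P3NS3S4M3
host # mc ls myminio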

Now we will do some basic operations: log into the distributed Minio storage, create a new bucket, and upload a file to it.

minio-browser-02

This is how an empty Minio cluster looks.

minio-browser-03

Select the Create Bucket option from the button below.

minio-browser-04-create-bucket

We will use the name test for our new bucket.

minio-browser-05-create-bucket

It is created and we can access it.

minio-browser-06-bucket

Let's use the Upload File option from the same menu as previously.

minio-browser-07-file-upload

The upload progress shown by Minio.

minio-browser-08-file-upload

The file has indeed been uploaded.

minio-browser-09-file-upload

By clicking on it we may access it directly from the browser.

minio-browser-10-file-display

We can also share a link to that file by using the File Menu as shown below.

minio-browser-10-file-link

The link creation dialog is shown below.

minio-browser-11-file-link

minio-browser-12-file-link

Let's see how Minio distributes the data (the ThinkPad Design – Spirit and Essence.pdf file in our case) over its data directories spread across the servers.

host # jexec storage1
root@storage1:/ # find /data?/test
/data1/test
/data1/test/ThinkPad Design - Spirit and Essence.pdf
/data1/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data1/test/ThinkPad Design - Spirit and Essence.pdf/part.1
/data2/test
/data2/test/ThinkPad Design - Spirit and Essence.pdf
/data2/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data2/test/ThinkPad Design - Spirit and Essence.pdf/part.1
root@storage1:/ # exit
host # jexec storage2
root@storage2:/ # find /data?/test
/data1/test
/data1/test/ThinkPad Design - Spirit and Essence.pdf
/data1/test/ThinkPad Design - Spirit and Essence.pdf/part.1
/data1/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data2/test
/data2/test/ThinkPad Design - Spirit and Essence.pdf
/data2/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data2/test/ThinkPad Design - Spirit and Essence.pdf/part.1
/data3/test
/data3/test/ThinkPad Design - Spirit and Essence.pdf
/data3/test/ThinkPad Design - Spirit and Essence.pdf/part.1
/data3/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data4/test
/data4/test/ThinkPad Design - Spirit and Essence.pdf
/data4/test/ThinkPad Design - Spirit and Essence.pdf/part.1
/data4/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data5/test
/data5/test/ThinkPad Design - Spirit and Essence.pdf
/data5/test/ThinkPad Design - Spirit and Essence.pdf/part.1
/data5/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data6/test
/data6/test/ThinkPad Design - Spirit and Essence.pdf
/data6/test/ThinkPad Design - Spirit and Essence.pdf/part.1
/data6/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data7/test
/data7/test/ThinkPad Design - Spirit and Essence.pdf
/data7/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data7/test/ThinkPad Design - Spirit and Essence.pdf/part.1
root@storage2:/ # exit
host # jexec storage3
root@storage3:/ # find /data?/test
/data1/test
/data1/test/ThinkPad Design - Spirit and Essence.pdf
/data1/test/ThinkPad Design - Spirit and Essence.pdf/part.1
/data1/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data2/test
/data2/test/ThinkPad Design - Spirit and Essence.pdf
/data2/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data2/test/ThinkPad Design - Spirit and Essence.pdf/part.1
/data3/test
/data3/test/ThinkPad Design - Spirit and Essence.pdf
/data3/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data3/test/ThinkPad Design - Spirit and Essence.pdf/part.1
/data4/test
/data4/test/ThinkPad Design - Spirit and Essence.pdf
/data4/test/ThinkPad Design - Spirit and Essence.pdf/part.1
/data4/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data5/test
/data5/test/ThinkPad Design - Spirit and Essence.pdf
/data5/test/ThinkPad Design - Spirit and Essence.pdf/part.1
/data5/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data6/test
/data6/test/ThinkPad Design - Spirit and Essence.pdf
/data6/test/ThinkPad Design - Spirit and Essence.pdf/part.1
/data6/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data7/test
/data7/test/ThinkPad Design - Spirit and Essence.pdf
/data7/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data7/test/ThinkPad Design - Spirit and Essence.pdf/part.1
root@storage3:/ # exit

We can also take a look at the Minio configuration file /usr/local/etc/minio/config.json that has been generated.

host # jexec storage1
root@storage1:/ # cat /usr/local/etc/minio/config.json 
{
        "version": "22",
        "credential": {
                "accessKey": "alibaba",
                "secretKey": "0P3NS3S4M3"
        },
        "region": "",
        "browser": "on",
        "domain": "",
        "storageclass": {
                "standard": "",
                "rrs": ""
        },
        "notify": {
                "amqp": {
                        "1": {
                                "enable": false,
                                "url": "",
                                "exchange": "",
                                "routingKey": "",
                                "exchangeType": "",
                                "deliveryMode": 0,
                                "mandatory": false,
                                "immediate": false,
                                "durable": false,
                                "internal": false,
                                "noWait": false,
                                "autoDeleted": false
                        }
                },
                "elasticsearch": {
                        "1": {
                                "enable": false,
                                "format": "",
                                "url": "",
                                "index": ""
                        }
                },
                "kafka": {
                        "1": {
                                "enable": false,
                                "brokers": null,
                                "topic": ""
                        }
                },
                "mqtt": {
                        "1": {
                                "enable": false,
                                "broker": "",
                                "topic": "",
                                "qos": 0,
                                "clientId": "",
                                "username": "",
                                "password": "",
                                "reconnectInterval": 0,
                                "keepAliveInterval": 0
                        }
                },
                "mysql": {
                        "1": {
                                "enable": false,
                                "format": "",
                                "dsnString": "",
                                "table": "",
                                "host": "",
                                "port": "",
                                "user": "",
                                "password": "",
                                "database": ""
                        }
                },
                "nats": {
                        "1": {
                                "enable": false,
                                "address": "",
                                "subject": "",
                                "username": "",
                                "password": "",
                                "token": "",
                                "secure": false,
                                "pingInterval": 0,
                                "streaming": {
                                        "enable": false,
                                        "clusterID": "",
                                        "clientID": "",
                                        "async": false,
                                        "maxPubAcksInflight": 0
                                }
                        }
                },
                "postgresql": {
                        "1": {
                                "enable": false,
                                "format": "",
                                "connectionString": "",
                                "table": "",
                                "host": "",
                                "port": "",
                                "user": "",
                                "password": "",
                                "database": ""
                        }
                },
                "redis": {
                        "1": {
                                "enable": false,
                                "format": "",
                                "address": "",
                                "password": "",
                                "key": ""
                        }
                },
                "webhook": {
                        "1": {
                                "enable": false,
                                "endpoint": ""
                        }
                }
        }

S3FS

We can also mount that test bucket from our distributed Minio object storage cluster as a filesystem using the S3FS project. Let's add the s3fs package and mount our bucket.

host # pkg install -y fusefs-s3fs

Now we will configure the password for our bucket. The .passwd-s3fs file uses the bucket:accesskey:secretkey format.

host # echo test:alibaba:0P3NS3S4M3 > /root/.passwd-s3fs
host # chmod 600 /root/.passwd-s3fs
host # cat /root/.passwd-s3fs 
test:alibaba:0P3NS3S4M3

Now let's do the actual mount.

host # mkdir /tmp/test
host # s3fs \
  -o allow_other \
  -o use_path_request_style \
  -o url=http://192.168.43.101:9000 \
  -o passwd_file=/root/.passwd-s3fs \
  test /tmp/test

The ThinkPad Design – Spirit and Essence.pdf file that we uploaded earlier through the web interface should be there.

host # exa -l /tmp/test
.--------- 10M root 2018-04-16 14:15 ThinkPad Design - Spirit and Essence.pdf

host # file /tmp/test/ThinkPad\ Design\ -\ Spirit\ and\ Essence.pdf 
/tmp/test/ThinkPad Design - Spirit and Essence.pdf: PDF document, version 1.4

host # stat /tmp/test/ThinkPad\ Design\ -\ Spirit\ and\ Essence.pdf
3976265496 2 ---------- 1 root wheel 0 10416953 "Jan  1 01:00:00 1970" "Apr 16 14:35:35 2018" "Jan  1 01:00:00 1970" "Jan  1 00:59:59 1970" 4096 20346 0 /tmp/test/ThinkPad Design - Spirit and Essence.pdf

We can now upload another file into that bucket using the s3fs mount.

host # cp -v /home/vermaden/On\ the\ Shortness\ of\ Life\ -\ Lucius\ Seneca.pdf /tmp/test
/home/vermaden/On the Shortness of Life - Lucius Seneca.pdf -> /tmp/test/On the Shortness of Life - Lucius Seneca.pdf

host # file /tmp/test/On\ the\ Shortness\ of\ Life\ -\ Lucius\ Seneca.pdf 
On the Shortness of Life - Lucius Seneca.pdf: PDF document, version 1.4

We can also verify that the file uploaded through s3fs is visible in the Minio web interface.

minio-browser-13-s3fs-upload

Real Hardware

Now, as we have a working Proof of Concept for the distributed Minio setup, how about putting it on real hardware for real storage purposes? I would set up a 16-node distributed Minio cluster on Supermicro SSG-5018D8-AR12L servers. Supermicro even suggests using this kind of server for object storage – here is their white paper on that topic – Object Storage Solution for Data Archive using Supermicro SSG-5018D8-AR12L and OpenIO SDS – although they use OpenIO rather than Minio as the distributed object storage solution.
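
Just to visualize what running such a cluster would look like, here is a minimal sketch of starting Minio in distributed mode on each of the 16 nodes – assuming the hostnames minio1 through minio16 resolve on the local network, each node keeps its data under /data/minio, and all nodes reuse the credentials from the PoC (the hostnames and the path are assumptions for illustration).

#!/bin/sh
# minimal sketch - run on every node to form the 16 node cluster
# hostnames minio1..minio16 and the /data/minio path are assumptions
NODES=""
for N in $( jot 16 1 ); do
  NODES="${NODES} http://minio${N}/data/minio"
done

# distributed mode requires identical credentials on all nodes
env MINIO_ACCESS_KEY=alibaba MINIO_SECRET_KEY=0P3NS3S4M3 \
  minio server ${NODES}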

This server features the Supermicro X10SDV-7TP4F motherboard. This is important as this motherboard officially supports the FreeBSD 11.x operating system according to the Supermicro OS Compatibility page.

The motherboard specification includes these features.

 1 x Intel Xeon D-1537 8-Core / 16-Threads TDP 35W
 4 x UDIMM for up to 128GB ECC RDIMM DDR4 2133MHz
12 x 3.5" SAS2/SATA3 Hot-Swap HDD Bays
 4 x 2.5" Cold-Swap HDD Bays
 1 x Controller Intel SoC for 4 SATA3 (6Gbps) Ports
 1 x Controller Broadcom 2116 for 16 SATA3 (6Gbps) Ports
 1 x Expansion Slot PCI-E 3.0 x8 
 1 x Expansion Slot M.2 PCIe 3.0 x4
 1 x Expansion Slot Mini-PCIe w/ mSATA Support
 2 x 10G SFP+ Port
 2 x 1GbE LAN Port
 2 x External USB 3.0 Port
 1 x Internal USB 2.0 Port
 2 x 400W High-Efficiency Redundant Power Supplies

You can configure your own and get an approximate price using the Thinkmate site here:
https://www.thinkmate.com/system/superstorage-server-5018d8-ar12l

I would add these components to the basic setup:

 4 x UDIMM FULL 128 GB ECC RDIMM DDR4
 2 x 240GB Micron 5100 MAX 2.5" SATA 6.0Gb/s SSD
 2 x 7.68TB Micron 5200 ECO Series 2.5" SATA 6.0Gb/s SSD
12 x 12TB SATA 6.0Gb/s 7200RPM 3.5" Hitachi Ultrastar™ He12
 3 x SanDisk Cruzer Fit 32GB USB 3.0

Now, I will use the 3 x SanDisk Cruzer Fit 32GB USB 3.0 drives to install FreeBSD on a ZFS root/boot pool – a two-way mirror plus a hot spare on these drives. We do not need performance here.
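
Here is a minimal sketch of such a root pool – assuming the three USB sticks show up as da0, da1 and da2 and got freebsd-zfs partitions at index 3 from the installer (the device names are assumptions).

host # zpool create zroot mirror da0p3 da1p3 spare da2p3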

Then, the 12 x 12TB SATA 6.0Gb/s 7200RPM 3.5″ Hitachi Ultrastar™ He12 drives will be used as RAIDZ (the RAID5 equivalent in ZFS, but without the write hole) for the Minio data, in an 11 + 1 setup, which means 11 drives for data and 1 drive for parity. As we can lose HALF of the Minio servers anyway, I would not waste a 12 TB drive on a spare here. Then, I would use the 2 x 240GB Micron 5100 MAX 2.5″ SATA 6.0Gb/s SSDs as a mirror for the ZFS ZIL (ZFS Intent Log) to accelerate synchronous writes, and the 2 x 7.68TB Micron 5200 ECO Series 2.5″ SATA 6.0Gb/s SSDs as the ZFS read cache (L2ARC).
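
A minimal sketch of that data pool layout – assuming the 12 HDDs appear as da3 through da14 behind the SAS controller and the four SSDs as ada0 through ada3 (again, the device names are assumptions for illustration).

host # zpool create data \
        raidz da3 da4 da5 da6 da7 da8 da9 da10 da11 da12 da13 da14 \
        log mirror ada0 ada1 \
        cache ada2 ada3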

The network would be set up on the 2 x 10G SFP+ ports with LACP as a lagg0 interface, so each server would have 20 Gbit connectivity. With 16 servers this gives us a total of 320 Gbit of theoretical network throughput.
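
A minimal /etc/rc.conf sketch of that LACP aggregation – assuming the two SFP+ ports show up as ix0 and ix1 and reusing the example address from the PoC (the interface names and the address are assumptions).

ifconfig_ix0="up"
ifconfig_ix1="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport ix0 laggport ix1 192.168.43.101/24"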

This setup would give us 132 TB of ZFS pool space, with about 15 TB of read cache and 240 GB of write log, per single 1U server. Doing the math, 16 such servers will give us 2112 TB (more than 2 PB) of raw space for Minio data.

With the Minio algorithm for data redundancy (its erasure code uses half of the raw space for parity by default) we will have about 1 PB of usable storage space in our 16U FreeBSD Object Storage Appliance.
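
The quick math behind these numbers:

host # echo "$(( 11 * 12 )) TB raw per server"
132 TB raw per server
host # echo "$(( 11 * 12 * 16 )) TB raw in 16 servers"
2112 TB raw in 16 servers
host # echo "$(( 11 * 12 * 16 / 2 )) TB usable with Minio redundancy"
1056 TB usable with Minio redundancy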

Not bad for my taste 🙂

UPDATE 1

The Distributed Object Storage with Minio on FreeBSD article was included in the BSD Now 246 – Disclosure episode.

Thanks for the mention!

EOF