Tag Archives: storage

Silent Fanless FreeBSD Server – Redundant Backup

I brought up this topic in the past. It was in the form of the more theoretical Silent Fanless FreeBSD Desktop/Server post and the more hands-on Silent Fanless FreeBSD Server – DIY Backup article.

One of the comments after the latter was that I compared a non-redundant backup solution (single disk) to a redundant backup in the cloud. Today – as this is my main backup system – I would like to show you a redundant backup solution with two disks in a ZFS mirror, along with real power usage measurements. This time I got the ASRock J3355B-ITX motherboard with only 10W TDP, which includes a 2-core Celeron J3355 2.0-2.5 GHz CPU, and a small shiny REALAN H80 Mini-ITX case. It looks very nice and comes from AliExpress at a very low $33 price for a new unit, along with free shipping.

Build

Here is how the REALAN H80 case looks.

realan-H80-render

The ASRock J3355B-ITX motherboard.

asrock-J3355B-ITX.jpg

Same as with the earlier build, the internal Seagate BarraCuda 5TB 2.5 SATA drive costs about $200. The same Seagate Backup Plus 5TB 2.5 disk in an external case with a USB 3.0 port costs nearly half of that price – only $120 – at least in the Europe/Poland location. I decided to buy the external ones and rip off their cases. That saved me about $160.

Here is a simple performance benchmark of these 2.5 disks.

% which pv
pv: aliased to pv -t -r -a -b -W -B 1048576

% pv < /dev/ada0 > /dev/null
1.35GiB 0:00:10 [ 137MiB/s] [ 137MiB/s]
^C

% dd < /dev/ada0 > /dev/null bs=8M
127+0 records in
127+0 records out
1065353216 bytes transferred in 7.494081 secs (142159287 bytes/sec)
^C

About 135MB/s per disk.

The ripped-off parts of the Seagate Backup Plus USB cases.

external-case-parts.jpg

What made me laugh was that as I got different case colors (silver and gray), the disks inside also had different colors (green and blue) :>

disks-bottom

… but their part number is the same. Here they are mounted on the REALAN H80 disk holder.

disks-mounted

For the record – several real shots of the REALAN H80 case (not renders). First its front.

realan-H80-front

Back.

realan-H80-back.jpg

Side with USB port.

realan-H80-side-usb

Bottom.

realan-H80-bottom.jpg

Top.

realan-H80-top

Case parts.

realan-H80-parts.jpg

Generally the REALAN H80 looks really nice. The slightly lower REALAN H60 (without COM slots/holes in the back) looks even better, but I wanted to make sure that I would have room for hot air in the case – as space was not a problem for me.

Cost

The complete price tops at $350 total. Here are the parts used.

PRICE  COMPONENT
  $49  CPU/Motherboard ASRock J3355B-ITX Mini-ITX
  $10  RAM 4GB DDR3
  $13  PSU 12V 7.5A 90W Pico (internal)
   $2  PSU 12V 2.5A 30W Leader Electronics (external)
  $33  REALAN H80 Mini-ITX Case
   $3  SanDisk Fit 16GB USB 2.0 Drive (system)
 $120  Seagate 5TB 2.5 drive (ONE)
 $120  Seagate 5TB 2.5 drive (TWO)
 $350  TOTAL

That is $110 for the ‘system’ and an additional $240 for the ‘data’ drives.

Today I would probably get the ASRock N3150DC-ITX or Gigabyte GA-N3160TN motherboard instead because of the built-in DC jack (compatible with a 19V power adapter) on its back. That would eliminate the need for the additional internal Pico PSU power supply …

The ASRock N3150DC-ITX with builtin DC jack.

asrock-N3150DC-ITX.jpg

The Gigabyte GA-N3160TN with builtin DC jack.

gigabyte-GA-N3160TN.jpg

The Gigabyte GA-N3160TN is also a very low-profile motherboard, as you can see from the back.

gigabyte-GA-N3160TN-back-other.jpg

It may be a good idea to use this one instead of the ASRock N3150DC-ITX to get more space above the motherboard.


PSU

As in the earlier Silent Fanless FreeBSD Server – DIY Backup article, I used a small, compact, and cheap 12V 2.5A 30W external PSU instead of the large 90W PSU from FSP Group, as these low-power motherboards do not need a lot of power.

New Leader Electronics PSU label.

silent-backup-psu-ext-label.jpg

The internal power supply is a Pico PSU which tops out at 12V 7.5A 90W.

silent-backup-psu-pico-12V-90W.jpg

Power Consumption

I also measured the power consumption with a power meter.

silent-backup-power-meter.jpg

The whole box with two Seagate BarraCuda 5TB 2.5 drives for data in a ZFS mirror and a SanDisk 16GB USB 2.0 system drive used about 10.4W at idle.

I used all the needed settings from my earlier The Power to Serve – FreeBSD Power Management article, with the CPU speed limited to between 0.4GHz and 1.2GHz.

The powerd(8) settings in the /etc/rc.conf file are below.

powerd_flags="-n hiadaptive -a hiadaptive -b hiadaptive -m 400 -M 1200"
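
If you want to check what the CPU driver offers before setting such limits, the dev.cpu sysctl tree has it. A quick sketch – the reported frequency levels are machine-specific.

% sysctl dev.cpu.0.freq_levels
% sysctl dev.cpu.0.freq
# sysrc powerd_enable=YES
# service powerd restart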

I used python(1) [1] to load the CPU and dd(8) to load the drives. I ran dd(8) on the ZFS pool, so one disk thread will read [2] and write [3] from/to both 2.5 disks. I temporarily disabled LZ4 compression for the write tests.

[1] # echo '999999999999999999 ** 999999999999999999' | python
[2] # dd < /data/FILE > /dev/null bs=1M
[3] # dd > /data/FILE < /dev/zero bs=1M

POWER   CPU LOAD         I/O LOAD
10.4 W  IDLE             IDLE
12.9 W  IDLE             1 DISK READ Thread(s)
14.3 W  IDLE             1 DISK READ Thread(s) + 1 DISK WRITE Thread(s)
17.2 W  IDLE             3 DISK READ Thread(s) + 3 DISK WRITE Thread(s)
11.0 W  8 CPU Thread(s)  IDLE
13.4 W  8 CPU Thread(s)  1 DISK READ Thread(s)
15.0 W  8 CPU Thread(s)  1 DISK READ Thread(s) + 1 DISK WRITE Thread(s)
17.8 W  8 CPU Thread(s)  3 DISK READ Thread(s) + 3 DISK WRITE Thread(s)
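
For the record, the multi-thread load from the table can be reproduced by starting several of these dd(8) processes in the background. A rough sketch, assuming the pool is mounted under /data and the FILE*/OUT* names are placeholders:

# dd < /data/FILE1 > /dev/null bs=1M &
# dd < /data/FILE2 > /dev/null bs=1M &
# dd < /data/FILE3 > /dev/null bs=1M &
# dd > /data/OUT1 < /dev/zero bs=1M &
# dd > /data/OUT2 < /dev/zero bs=1M &
# dd > /data/OUT3 < /dev/zero bs=1M &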

That is not much, remembering that the 6W TDP ASRock N3150B-ITX motherboard with just a single Maxtor M3 4TB 2.5 USB 3.0 drive used 16.0W with CPU and I/O loaded. Only 1.8W more (on a loaded system) with redundancy on two 2.5 disks.

Commands

The FreeBSD crypto kernel module was able to squeeze about 68MB/s of random data from /dev/random as this CPU has built-in hardware AES-NI acceleration. Note to Linux users – /dev/random and /dev/urandom are the same thing on FreeBSD. I used both the dd(8) and pv(1) commands for this simple test. I made two tests – with powerd(8) enabled and disabled – to check the difference between the CPU running at 1.2GHz and at 2.5GHz with Turbo mode.

Full speed with Turbo enabled (note 2001 instead of 2000 for the CPU frequency).

# /etc/rc.d/powerd stop
Stopping powerd.
Waiting for PIDS: 1486.

% sysctl dev.cpu.0.freq
dev.cpu.0.freq: 2001

% which pv
pv: aliased to pv -t -r -a -b -W -B 1048576

% dd < /dev/random bs=8M | pv > /dev/null
1.91GiB 0:00:31 [68.7MiB/s] [68.1MiB/s]
265+0 records in
265+0 records out
2222981120 bytes transferred in 33.566154 secs (70226864 bytes/sec)
^C

With the CPU limited to 1.2GHz by the powerd(8) daemon it was able to squeeze about 24MB/s.

# service powerd start
Starting powerd.

% which pv
pv: aliased to pv -t -r -a -b -W -B 1048576

% dd < /dev/random bs=8M | pv > /dev/null
568MiB 0:00:23 [25.3MiB/s] [24.7MiB/s]
71+0 records in
71+0 records out
595591168 bytes transferred in 23.375588 secs (25479195 bytes/sec)
^C

Below is the data from dmesg(8) about the USB and 2.5 drives used.

The dmesg(8) information for the SanDisk Fit USB 2.0 16GB drive.

# grep da0 /var/run/dmesg.boot
da0 at umass-sim1 bus 1 scbus3 target 0 lun 0
da0:  Removable Direct Access SPC-4 SCSI device
da0: Serial Number 4C530002030502100093
da0: 400.000MB/s transfers
da0: 14663MB (30031250 512 byte sectors)
da0: quirks=0x2

… and two Seagate BarraCuda 5TB 2.5 drives.

# grep ada /var/run/dmesg.boot
ada0 at ahcich0 bus 0 scbus0 target 0 lun 0
ada0:  ACS-3 ATA SATA 3.x device
ada0: Serial Number WCJ0DRJE
ada0: 600.000MB/s transfers (SATA 3.x, UDMA6, PIO 8192bytes)
ada0: Command Queueing enabled
ada0: 4769307MB (9767541168 512 byte sectors)
ada1 at ahcich1 bus 0 scbus1 target 0 lun 0
ada1:  ACS-3 ATA SATA 3.x device
ada1: Serial Number WCJ0213S
ada1: 600.000MB/s transfers (SATA 3.x, UDMA6, PIO 8192bytes)
ada1: Command Queueing enabled
ada1: 4769307MB (9767541168 512 byte sectors)

The whole /var/run/dmesg.boot content (without disks) is shown below.

# cat /var/run/dmesg.boot
Copyright (c) 1992-2018 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
        The Regents of the University of California. All rights reserved.
FreeBSD is a registered trademark of The FreeBSD Foundation.
FreeBSD 11.2-RELEASE-p7 #0: Tue Dec 18 08:29:33 UTC 2018
    root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC amd64
FreeBSD clang version 6.0.0 (tags/RELEASE_600/final 326565) (based on LLVM 6.0.0)
VT(vga): resolution 640x480
CPU: Intel(R) Celeron(R) CPU J3355 @ 2.00GHz (1996.88-MHz K8-class CPU)
  Origin="GenuineIntel"  Id=0x506c9  Family=0x6  Model=0x5c  Stepping=9
  Features=0xbfebfbff
  Features2=0x4ff8ebbf
  AMD Features=0x2c100800
  AMD Features2=0x101
  Structured Extended Features=0x2294e283
  XSAVE Features=0xf
  VT-x: PAT,HLT,MTF,PAUSE,EPT,UG,VPID,VID,PostIntr
  TSC: P-state invariant, performance statistics
real memory  = 4294967296 (4096 MB)
avail memory = 3700518912 (3529 MB)
Event timer "LAPIC" quality 600
ACPI APIC Table: 
WARNING: L1 data cache covers less APIC IDs than a core
0 < 1
FreeBSD/SMP: Multiprocessor System Detected: 2 CPUs
FreeBSD/SMP: 1 package(s) x 2 core(s)
ioapic0  irqs 0-119 on motherboard
SMP: AP CPU #1 Launched!
Timecounter "TSC" frequency 1996877678 Hz quality 1000
random: entropy device external interface
kbd1 at kbdmux0
netmap: loaded module
module_register_init: MOD_LOAD (vesa, 0xffffffff80ff4580, 0) error 19
random: registering fast source Intel Secure Key RNG
random: fast provider: "Intel Secure Key RNG"
nexus0
vtvga0:  on motherboard
cryptosoft0:  on motherboard
acpi0:  on motherboard
unknown: I/O range not supported
cpu0:  on acpi0
cpu1:  on acpi0
attimer0:  port 0x40-0x43,0x50-0x53 irq 0 on acpi0
Timecounter "i8254" frequency 1193182 Hz quality 0
Event timer "i8254" frequency 1193182 Hz quality 100
atrtc0:  port 0x70-0x77 on acpi0
atrtc0: Warning: Couldn't map I/O.
atrtc0: registered as a time-of-day clock, resolution 1.000000s
Event timer "RTC" frequency 32768 Hz quality 0
hpet0:  iomem 0xfed00000-0xfed003ff irq 8 on acpi0
Timecounter "HPET" frequency 19200000 Hz quality 950
Event timer "HPET" frequency 19200000 Hz quality 550
Event timer "HPET1" frequency 19200000 Hz quality 440
Event timer "HPET2" frequency 19200000 Hz quality 440
Event timer "HPET3" frequency 19200000 Hz quality 440
Event timer "HPET4" frequency 19200000 Hz quality 440
Event timer "HPET5" frequency 19200000 Hz quality 440
Event timer "HPET6" frequency 19200000 Hz quality 440
Timecounter "ACPI-fast" frequency 3579545 Hz quality 900
acpi_timer0:  port 0x408-0x40b on acpi0
pcib0:  port 0xcf8-0xcff on acpi0
pci0:  on pcib0
vgapci0:  port 0xf000-0xf03f mem 0x90000000-0x90ffffff,0x80000000-0x8fffffff irq 19 at device 2.0 on pci0
vgapci0: Boot video device
hdac0:  mem 0x91210000-0x91213fff,0x91000000-0x910fffff irq 25 at device 14.0 on pci0
pci0:  at device 15.0 (no driver attached)
ahci0:  port 0xf090-0xf097,0xf080-0xf083,0xf060-0xf07f mem 0x91214000-0x91215fff,0x91218000-0x912180ff,0x91217000-0x912177ff irq 19 at device 18.0 on pci0
ahci0: AHCI v1.31 with 2 6Gbps ports, Port Multiplier supported
ahcich0:  at channel 0 on ahci0
ahcich1:  at channel 1 on ahci0
pcib1:  irq 22 at device 19.0 on pci0
pci1:  on pcib1
pcib2:  irq 20 at device 19.2 on pci0
pci2:  on pcib2
re0:  port 0xe000-0xe0ff mem 0x91104000-0x91104fff,0x91100000-0x91103fff irq 20 at device 0.0 on pci2
re0: Using 1 MSI-X message
re0: Chip rev. 0x4c000000
re0: MAC rev. 0x00000000
miibus0:  on re0
rgephy0:  PHY 1 on miibus0
rgephy0:  none, 10baseT, 10baseT-FDX, 10baseT-FDX-flow, 100baseTX, 100baseTX-FDX, 100baseTX-FDX-flow, 1000baseT-FDX, 1000baseT-FDX-master, 1000baseT-FDX-flow, 1000baseT-FDX-flow-master, auto, auto-flow
re0: Using defaults for TSO: 65518/35/2048
re0: Ethernet address: 70:85:c2:3f:53:41
re0: netmap queues/slots: TX 1/256, RX 1/256
xhci0:  mem 0x91200000-0x9120ffff irq 17 at device 21.0 on pci0
xhci0: 32 bytes context size, 64-bit DMA
usbus0 on xhci0
usbus0: 5.0Gbps Super Speed USB v3.0
isab0:  at device 31.0 on pci0
isa0:  on isab0
acpi_button0:  on acpi0
acpi_tz0:  on acpi0
atkbdc0:  at port 0x60,0x64 on isa0
atkbd0:  irq 1 on atkbdc0
kbd0 at atkbd0
atkbd0: [GIANT-LOCKED]
ppc0: cannot reserve I/O port range
est0:  on cpu0
est1:  on cpu1
ZFS filesystem version: 5
ZFS storage pool version: features support (5000)
Timecounters tick every 1.000 msec
hdacc0:  at cad 0 on hdac0
hdaa0:  at nid 1 on hdacc0
ugen0.1:  at usbus0
uhub0:  on usbus0
pcm0:  at nid 21 and 24,26 on hdaa0
pcm1:  at nid 20 and 25 on hdaa0
pcm2:  at nid 27 on hdaa0
hdacc1:  at cad 2 on hdac0
hdaa1:  at nid 1 on hdacc1
pcm3:  at nid 3 on hdaa1
uhub0: 15 ports with 15 removable, self powered
ugen0.2:  at usbus0
uhub1 on uhub0
uhub1:  on usbus0
uhub1: 4 ports with 4 removable, self powered
Trying to mount root from zfs:zroot/ROOT/default []...
random: unblocking device.
re0: link state changed to DOWN

ZFS Pool Configuration

To get a higher LZ4 compression ratio I use a larger record size (1MB) on this ZFS mirror pool. Here is the ZFS pool status.

% zpool status data
  pool: data
 state: ONLINE
  scan: scrub repaired 0 in 44h14m with 0 errors on Mon Feb 11 07:13:42 2019
config:

        NAME                STATE     READ WRITE CKSUM
        data                ONLINE       0     0     0
          mirror-0          ONLINE       0     0     0
            label/WCJ0213S  ONLINE       0     0     0
            label/WCJ0DRJE  ONLINE       0     0     0

errors: No known data errors

I get 4% compression (1.04x) on that ZFS pool. It is about 80% filled with lots of movies and photos, so while such a compression ratio may not be great, it still gives a lot of space. For example, 4% of 4TB of data is about 160GB of ‘free’ space.

% zfs get compressratio data
NAME                                    PROPERTY       VALUE  SOURCE
data                                    compressratio  1.04x  -
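
To see the same savings in absolute numbers you can compare the logical and physical space accounting of the pool. A quick sketch using the standard used and logicalused ZFS properties – the difference between them is roughly the space that LZ4 saved.

% zfs get used,logicalused data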

Here is the ZFS pool configuration.

# zpool history
History for 'data':
2018-11-12.01:18:33 zpool create data mirror /dev/label/WCJ0229Z /dev/label/WCJ0DPHF
2018-11-12.01:19:11 zfs set mountpoint=none data
2018-11-12.01:19:16 zfs set compression=lz4 data
2018-11-12.01:19:21 zfs set atime=off data
2018-11-12.01:19:34 zfs set primarycache=metadata data
2018-11-12.01:19:40 zfs set secondarycache=metadata data
2018-11-12.01:19:45 zfs set redundant_metadata=most data
2018-11-12.01:19:51 zfs set recordsize=1m data
(...)

We do not need the extra redundant metadata as we already have two disks – it is useful only in single-disk configurations, hence the redundant_metadata=most setting.
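
For completeness – the /dev/label/* providers used above are generic glabel(8) labels named after the drive serial numbers, created before building the pool. A minimal sketch of that preparation step, assuming the drives appear as ada0 and ada1:

# glabel label WCJ0DRJE ada0
# glabel label WCJ0213S ada1
# zpool create data mirror /dev/label/WCJ0DRJE /dev/label/WCJ0213S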

Self Solution Cost

As in the earlier post I will again calculate how much energy this server would consume. Currently 1kWh of power costs about $0.20 in the Europe/Poland location (rounded up). This means that running a computer with 1000W power usage for 1 hour would cost you $0.20 on the electricity bill. This system uses 10.4W idle and 12.9W when a single disk read occurs. The server will be idle most of the time, so I assume 11.0W on average for the pricing purposes.

That would cost us $0.0022 for an 11.0W device running for 1 hour.

Below you will also find calculations for 1 day (24x multiplier), 1 year (another 365.25x multiplier), and 3 and 5 years (3x and 5x multipliers).

   COST  TIME
$0.0022  1 HOUR(S)
$0.0528  1 DAY(S)
$19.285  1 YEAR(S)
$57.856  3 YEAR(S)
$96.426  5 YEAR(S)
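
These numbers are easy to recompute. A quick sketch with bc(1) – the first command gives the hourly cost and the second the 3-year total including the $350 hardware:

% echo '11 * 0.20 / 1000' | bc -l
% echo '11 * 0.20 / 1000 * 24 * 365.25 * 3 + 350' | bc -l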

Combining that with the server cost ($350) we get the TCO for our self-hosted 5TB storage service.

   COST  TIME
$369.29  1 YEAR(S)
$407.86  3 YEAR(S)
$446.43  5 YEAR(S)

Our total 3-year TCO is $407.86 and the 5-year one is $446.43. That is for running the system non-stop. We can also implement features like Wake On LAN to limit that power usage even more, as sketched below.
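
A minimal Wake On LAN sketch – FreeBSD ships the wake(8) utility in base, so another machine on the same Ethernet segment can power the box up on demand. The target MAC below is the re0 address from dmesg(8); em0 is an assumed interface name on the sending machine, and the NIC/BIOS must have WOL enabled.

# wake em0 70:85:c2:3f:53:41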

Cloud Storage Prices

This time, after searching for the cheapest cloud-based storage, I found these services.

  • Amazon Drive
  • Amazon S3 Glacier Storage
  • Backblaze B2 Cloud Storage
  • Google One

Here are their costs summarized for a 1-year period for 5TB of data.

PRICE  TIME       SERVICE
 $300  1 YEAR(S)  Amazon Drive
 $310  1 YEAR(S)  Google One
 $240  1 YEAR(S)  Amazon S3 Glacier Storage
 $450  1 YEAR(S)  Backblaze B2 Cloud Storage

For the Backblaze B2 Cloud Storage I assumed the average between the upload/download price because upload is two times cheaper than download.

Here are their costs summarized for a 3-year period for 5TB of data.

PRICE  TIME       SERVICE
 $900  3 YEAR(S)  Amazon Drive
 $930  3 YEAR(S)  Google One
 $720  3 YEAR(S)  Amazon S3 Glacier Storage
$1350  3 YEAR(S)  Backblaze B2 Cloud Storage

Here are their costs summarized for a 5-year period for 5TB of data.

PRICE  TIME       SERVICE
$1500  5 YEAR(S)  Amazon Drive
$1550  5 YEAR(S)  Google One
$1200  5 YEAR(S)  Amazon S3 Glacier Storage
$2250  5 YEAR(S)  Backblaze B2 Cloud Storage

Now let's compare the costs of our own server to the various cloud services.

If we ran our server for just 1 year the price would be similar.

PRICE  TIME       SERVICE
 $369  1 YEAR(S)  Self Build NAS
 $300  1 YEAR(S)  Amazon Drive
 $310  1 YEAR(S)  Google One
 $240  1 YEAR(S)  Amazon S3 Glacier Storage
 $450  1 YEAR(S)  Backblaze B2 Cloud Storage

It gets interesting when we compare the 3-year costs. It is two times cheaper to self-host our own server than to use cloud services. One may argue that clouds are located in many places, but we could buy two such boxes and put one in another part of the world – for example at a friend's place in Jamaica – and still come out cheaper.

PRICE  TIME       SERVICE
 $408  3 YEAR(S)  Self Build NAS
 $528  3 YEAR(S)  Self Build NAS (assuming one of the drives failed)
 $900  3 YEAR(S)  Amazon Drive
 $930  3 YEAR(S)  Google One
 $720  3 YEAR(S)  Amazon S3 Glacier Storage
$1350  3 YEAR(S)  Backblaze B2 Cloud Storage

… but over 5 years, using a cloud service instead of a self-hosted NAS solution is 3-5 times more expensive … and these were the cheapest cloud services I was able to find. I do not even want to know how much it would cost on Dropbox for example :)

PRICE  TIME       SERVICE
 $447  5 YEAR(S)  Self Build NAS
 $567  5 YEAR(S)  Self Build NAS (assuming one of the drives failed)
$1500  5 YEAR(S)  Amazon Drive
$1550  5 YEAR(S)  Google One
$1200  5 YEAR(S)  Amazon S3 Glacier Storage
$2250  5 YEAR(S)  Backblaze B2 Cloud Storage

… and ‘anywhere’ access is not an argument for cloud services because you can get an external IP address for your NAS or use Dynamic DNS – for free. You may also wonder why I compare such a ‘full featured NAS’ with S3 storage … well, with rclone (rsync for cloud storage) you are able to synchronize your files with almost anything :)
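
A minimal rclone sketch – after defining a remote with rclone config (the remote name b2remote and the bucket name below are just placeholders), a one-way synchronization of the pool is a single command.

% rclone config
% rclone sync /data b2remote:backup-bucket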

Not to mention how much more privacy you have when keeping all your data to yourself … but that is priceless.

You can also set up a lot more services on such hardware – like FreeNAS with Bhyve/Jails virtualization … or a Nextcloud instance … or Syncthing … while cloud storage is only that – storage in the cloud.

Summary

Not sure what else I could include in this article. If you have an idea what else I could cover, then let me know.

EOF


Valuable News – 2019/02/25

The Valuable News weekly series is dedicated to providing a summary of news, articles, and other interesting stuff, mostly but not always related to UNIX or BSD systems. Whenever I stumble upon something worth mentioning on the Internet I just put it here so someone else can also benefit from it.

Today the amount of information that we get through various information streams is a massive overload. Thus one needs to focus only on what is important, without the need to grep(1) the Internet every day. Hence the idea of providing such an information ‘bulk’, as I already do that grep(1).

UNIX

The Scanned versus Issued numbers for ZFS Scrubs (and Resilvers).
https://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSScrubScannedVsIssued

rclone is a command line program to sync files and directories to and from many cloud services.
https://rclone.org/

Make ZFS Snapshots work with Samba as Windows Shadow Copies (VSS).
https://github.com/zfsonlinux/zfs-auto-snapshot/wiki/Samba

Dynamically linked binaries will be built as PIE on FreeBSD.
https://svnweb.freebsd.org/base?view=revision&revision=344179

Book of Secret Knowledge.
Collection of awesome lists/manuals/blogs/hacks/one liners/cli/web tools and more.
https://github.com/trimstray/the-book-of-secret-knowledge

FreeBSD ZFS AMIs Now Available.
http://www.daemonology.net/blog/2019-02-16-FreeBSD-ZFS-AMIs-now-available.html

OpenBSD Desktop Using Window Maker.
https://www.tumfatig.net/20190215/an-openbsd-desktop-using-windowmaker/

Wine Developers Release Hangover Alpha to Run Windows x86_64 Programs on 64-Bit ARM.
https://www.phoronix.com/scan.php?page=news_item&px=Hangover-0.4-Alpha-Released

Unix Architecture Diagrams – Modern FreeBSD.
https://dspinellis.github.io/unix-architecture/arch.pdf

Hacker News Discussion – Why don’t companies use FreeBSD as much in production as Linux?
https://news.ycombinator.com/item?id=12199394

Pinboard tags for various BSDs.
Social Bookmarking for Introverts.
https://pinboard.in/t:freebsd/
https://pinboard.in/t:openbsd/
https://pinboard.in/t:netbsd/
https://pinboard.in/t:dragonflybsd/
https://pinboard.in/t:bsd/

Bastille – Quickly Create and Manage FreeBSD Jails.
https://bastillebsd.org/
https://freshports.org/sysutils/bastille

FreeBSD Starter Kit.
https://github.com/BastilleBSD/starterkit

FreeBSD nsysctl Tutorial.
https://alfix.gitlab.io/bsd/2019/02/19/nsysctl-tutorial.html

OpenBSD and iSCSI Part 1 – Target (Server).
https://dataswamp.org/~solene/2019-02-21-iscsi-server.html

OpenBSD and iSCSI Part 2 – Initiator (Client).
https://dataswamp.org/~solene/2019-02-21-iscsi-client.html

New Bhyve on FreeBSD vCPU limit will be 254.
https://reviews.freebsd.org/D18815

FreeBSD adds kernel support for Intel userspace protection keys feature on Skylake Xeons.
https://svnweb.freebsd.org/base?view=revision&revision=344353

Looking at MySQL 8 with PostgreSQL Goggles On.
https://www.cybertec-postgresql.com/en/looking-at-mysql-8-with-postgresql-goggles-on/

FreeNAS 11.2-U2 Available.
https://www.ixsystems.com/blog/library/freenas-11-2-u2/

Free Algorithms book by Jeff Erickson.
http://jeffe.cs.illinois.edu/teaching/algorithms/#book

Mutatio is a simple script to keep up to date with OpenBSD updates and to download new snapshots.
https://github.com/joedicastro/mutatio

PXE Booting of FreeBSD Disk Image.
https://blog.cochard.me/2019/02/pxe-booting-of-freebsd-disk-image.html

XigmaNAS 11.2.0.4.6536 Available.
https://sourceforge.net/projects/xigmanas/files/XigmaNAS-11.2.0.4/11.2.0.4.6536/

XigmaNAS 12.0.0.4.6536 Beta Available.
https://sourceforge.net/projects/xigmanas/files/XigmaNAS-Beta/XigmaNAS-12.0.0.4.6536/

HAXM in pkgsrc.
HAXM is a hardware-assisted virtualization engine (hypervisor).
https://mail-index.netbsd.org/netbsd-users/2019/02/13/msg022207.html
https://blog.netbsd.org/tnf/entry/the_hardware_assisted_virtualization_challenge

FreeBSD Imports Linux debugfs Support.
https://svnweb.freebsd.org/base?view=revision&revision=344485

FreeBSD Find Out All Installed Hard Disk Information.
https://www.cyberciti.biz/faq/freebsd-hard-disk-information/

In Other BSDs for 2019/02/23.
https://www.dragonflydigest.com/2019/02/23/22569.html

In Other BSDs for 2019/02/16.
https://www.dragonflydigest.com/2019/02/16/22529.html

Hardware

Western Digital RISC-V SweRV Core Design Released for Free.
https://www.anandtech.com/show/13964/western-digitals-riscv-swerv-core-released-for-free

The Last POWER1 CPU on Mars is Dead.
https://www.talospace.com/2019/02/the-last-power1-on-mars-is-dead.html

Lasers vs. Microwaves – Billion Dollar Bet on the Future of Magnetic Storage.
Seagate and Western Digital are pursuing rival technologies to push limits of hard disks.
https://spectrum.ieee.org/computing/hardware/lasers-vs-microwaves-the-billiondollar-bet-on-the-future-of-magnetic-storage

AMD EPYC 3201 8-Core 30W Benchmarks Review and Milestone.
https://www.servethehome.com/amd-epyc-3201-8-core-benchmarks-review-and-milestone/

Samsung 983 ZET (Z-NAND) SSD Review.
https://www.anandtech.com/show/13951/the-samsung-983-zet-znand-ssd-review/

History – SUN Modular Data Center also known as Project Blackbox.
https://gigazine.net/gsc_news/en/20061018_blackbox/
http://data-centers.in/portable-data-center/

First Intel 4.0 GHz – Pentium Gold G5620 at Retail.
https://www.anandtech.com/show/13976/intels-first-40-ghz-pentium-pentium-gold-g5620-listed-at-retail

Journey to Next Gen ARM Neoverse N1 and E1 Cores.
https://www.servethehome.com/arm-neoverse-n-e-tech-day/

AMD Hiring 10 More People for Their Open Source Linux Driver Team.
https://www.phoronix.com/scan.php?page=news_item&px=AMD-Hiring-10-More-Open-Source

Samsung Galaxy Fold – First Folding Smartphone.
https://www.anandtech.com/show/13981/samsung-announces-the-galaxy-fold-the-first-folding-smartphone

Supermicro making push into high end gaming motherboards.
https://www.zdnet.com/article/supermicro-making-a-push-into-high-end-gaming-motherboards/

Intel believes that ARM Macs could come as soon as 2020.
https://appleinsider.com/articles/19/02/21/intel-officials-believe-that-arm-macs-could-come-as-soon-as-2020

Apple move to ARM based Macs creates uncertainty.
https://www.axios.com/apple-macbook-arm-chips-ea93c38a-d40a-4873-8de9-7727999c588c.html

Toshiba Collaborates with Showa Denko for MAMR 18 TB HDDs.
https://www.anandtech.com/show/13991/toshiba-collaborates-with-showa-denko-for-mamr-hdds

Life

Why I hate the weekends…
http://www.cdahmedeh.net/blog/2017/4/15/why-i-hate-the-weekends

Study finds no evidence cough medicines work with 1/7 patients experiencing negative side effects.
https://www.independent.co.uk/news/health/cough-medicine-work-help-persistent-symptoms-weeks-asthma-a8531286.html

How We Lost Our Ability to Mend.
https://dieworkwear.com/post/182126040434/how-we-lost-our-ability-to-mend

Four day week trial – study finds lower stress but no cut in output.
https://www.theguardian.com/money/2019/feb/19/four-day-week-trial-study-finds-lower-stress-but-no-cut-in-output

Google says the builtin microphone it never told Nest users about was ‘never supposed to be a secret’.
https://www.businessinsider.com/nest-microphone-was-never-supposed-to-be-a-secret-2019-2?IR=T

How Jan and Martina Died.
https://www.occrp.org/en/unfinishedlives/how-jan-and-martina-died

Other

By Summer 2019 Firefox will Block by Default All Cross-Site Third-Party Trackers.
https://twitter.com/jensimmons/status/1098335173089873920

EOF

 

Silent Fanless FreeBSD Server – DIY Backup

I already wrote about this topic once in the Silent Fanless FreeBSD Desktop/Server article. To my pleasant surprise, BSD NOW Episode 253: Silence of the Fans featured my article, for which I am very grateful. Today I would like to show another practical example of such a setup, with a more hands-on approach, along with real power usage measurements taken with a power meter. I also got the more power-efficient ASRock N3150B-ITX motherboard with only 6W TDP, which includes a 4-core Celeron N3150 CPU, and a nice small Supermicro SC101i Mini-ITX case. Keep in mind that ASRock also made the very similar N3150-ITX motherboard (no ‘B’ in the model name) with different ports/connectors that may suit your needs better.

Build

Here is how the Supermicro SC101i case looks with the ASRock N3150B-ITX motherboard installed.

silent-backup-case-external.jpg

silent-backup-case-back.jpg

One thing that surprised me very much was the hard disk cost. The internal Seagate 4TB ST4000LM024 2.5 SATA drive costs about $180-190, but the same disk sold as the Maxtor M3 4TB 2.5 disk in an external case with the Maxtor brand (which is owned by Seagate anyway) and a USB 3.0 port costs half of that – about $90-100. At least in the Europe/Poland location.

I think you already know where I am going with this. I will use an external Maxtor M3 4TB 2.5 drive and connect it via the USB 3.0 port in this setup. While SATA III provides a theoretical throughput of 6Gbps, USB 3.0 provides a 5Gbps theoretical throughput. The difference can be important for low-latency high-throughput SSD drives that approach 580MB/s speeds, but not for traditional rotational disks moving gently at 5400RPM.

The maximum performance I was able to squeeze from this Maxtor M3 4TB 2.5 USB 3.0 drive was 90MB/s write speed and 120MB/s read speed using the pv(1) tool, and that was at the beginning of the disk. These speeds drop to about 70MB/s and 90MB/s at the end of the disk, respectively, for write and read operations. We are not even approaching the SATA I standard here, which tops out at 1.5Gbps, so the USB 3.0 connection will not make a difference – or certainly not a significant one – for such storage.
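
For reference, a raw sequential read/write check with pv(1) can be as simple as the sketch below – assuming the Maxtor drive shows up as da1, and keeping in mind that the second command is destructive as it overwrites the disk.

# pv < /dev/da1 > /dev/null
# pv < /dev/zero > /dev/da1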

At first I wanted to drill a hole in the steel plate at the motherboard end of the case (somewhere beside the back ports) to route a USB cable out of the case and attach it to one of the USB 3.0 ports at the back of the motherboard, but fortunately I got a better idea. This motherboard has a connector for internal USB 3.0 (the so-called front panel USB on the case), so I bought an Akyga AK-CA-57 front panel cable with a USB 3.0 port and connected everything inside the case.

This is the Akyga AK-CA-57 USB 3.0 cable.

silent-backup-usb-akyga-cable-AK-CA-57.jpg

If I was going to install two USB 3.0 disks using this method, I would use one of the dual-port front panel cables instead.

The only problem can be a more physical one – will it blend, will it fit? Fortunately I was able to find a way to fit it in the case, and there is even space for a second disk. As this will be my offsite backup replacement – which is only the 3rd stage/offsite backup – I do not need to create redundant mirror/RAID1 protection, but it is definitely possible with two Maxtor M3 4TB 2.5 USB 3.0 drives.

The opened Supermicro SC101i case with the ASRock N3150B-ITX motherboard inside and the attached Pico PSU looks like this.

silent-backup-mobo-case.jpg

With the Akyga AK-CA-57 USB 3.0 cable attached things get a little narrow, but with a proper cable layout you will still be able to fit another internal 2.5 SATA disk or external 2.5 USB 3.0 disk.

silent-backup-mobo-case-blue.jpg

I attached the Akyga AK-CA-57 cable to this USB 3.0 connector on the motherboard.

silent-backup-mobo-case-usb.jpg

Case with the Maxtor M3 4TB disk. The disk placement required small modifications.

silent-backup-mobo-case-blue-disk.jpg

I created custom disk holders using steel plates I got from a window mosquito net set for my home, but you should be able to get something similar in any hardware shop. I modified them a little with pliers.

silent-backup-handles

I also ‘silenced’ the disk vibrations with felt stickers.

silent-backup-silence.jpg

The silenced disk in the Supermicro SC101i case.

silent-backup-mobo-case-blue-disk-silence.jpg

Ancestor

Before this setup I used a Raspberry Pi 2B with an external Western Digital 2TB 2.5 USB 3.0 disk, but the storage space requirements became larger so I needed to upgrade. It was of course set up with GELI encryption and ZFS with LZ4 compression enabled on top. The four humble ARM32 cores and the soldered 1GB of RAM were able to squeeze out a whopping 5MB/s read/write experience from this ZFS/GELI setup, but that was not hurting me as I used rsync(1) for differential backups and the Internet connection to that box was limited to about 1.5MB/s. I would still use that setup, but it just would not boot with the larger Maxtor M3 4TB disk because it requires more power, and I already used a stronger 5V 3.1A charger than the 5V 2.0A suggested by the vendor. Even the safe_mode_gpio=4 and max_usb_current=1 options in /boot/msdos/config.txt did not help.
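
For the record, a GELI plus ZFS stack like the one used on that Raspberry Pi can be sketched in a few commands – the device name and pool name below are placeholders, and geli(8) init destroys existing data on the device.

# geli init -s 4096 /dev/da1
# geli attach /dev/da1
# zpool create -O compression=lz4 backup /dev/da1.eli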

Cost

The complete setup price tops at $220 total. Here are the parts used.

PRICE  COMPONENT
  $59  CPU/Motherboard ASRock N3150B-ITX Mini-ITX
  $14  RAM Crucial 4GB DDR3L 1.35V
  $13  PSU 12V 7.5A 90W Pico (internal)
   $2  PSU 12V 2.5A 30W Leader Electronics (external)
  $29  Supermicro SC101i (used)
   $3  Akyga AK-CA-57 USB 3.0 Cable
   $3  SanDisk Fit 16GB USB 2.0 Drive (system)
  $95  Maxtor M3 4TB 2.5 USB 3.0 Drive (data)
 $220  TOTAL

PSU

In the earlier Silent Fanless FreeBSD Desktop/Server article I used a quite large 90W PSU from FSP Group. Of the PSUs that I have owned, only the ThinkPad W520/W530 bricks can compete in size with this beast. As this motherboard uses very little power (details below), it requires a much smaller PSU. As the FSP Group PSU has an IEC C14 slot, it also requires an additional IEC C13 power cable, which makes it an even bigger solution. The new 12V 2.5A 30W unit is very compact and also costs a fraction of the 90W FSP Group gojira.

New Leader Electronics PSU label.

silent-backup-psu-ext-label.jpg

Below you can see the comparison for yourself.

silent-backup-psu-compare

I also got a cheaper and less powerful Pico PSU which tops out at 12V 7.5A 90W.

silent-backup-psu-pico-12V-90W.jpg

Power Consumption

This is where it gets really interesting. I measured the power consumption with a power meter.

silent-backup-power-meter.jpg

Idle

When this box is booted without any media attached it uses only 7.5W of power idling. While the system was idle with the SanDisk 16GB USB 2.0 drive (on which FreeBSD was installed) it used about 8.0W of power. When booted with the Maxtor M3 4TB disk inside and the SanDisk 16GB USB 2.0 drive attached it ran idle at about 8.5W of power.

Load

As I do not need full CPU speed I limited the CPU speed in the powerd(8) options to 1.2GHz. With this limit set, the fully loaded system – all 4 cores busy at 100%, two dd(8) processes reading from both the boot SanDisk 16GB drive and the Maxtor M3 4TB disk, the GELI-encrypted ZFS pool with a scrub operation in progress, and two additional find(1) processes on both disks – would not pass the 13.9W barrier. Without the CPU limitation (that means with Intel Turbo Boost enabled) the system used 16.0W of power at most.
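
A rough sketch of that I/O load mix – the device names and the pool name are placeholders for the ones used on this box.

# dd < /dev/da0 > /dev/null bs=1M &
# dd < /dev/da1 > /dev/null bs=1M &
# zpool scrub backup
# find / > /dev/null 2>&1 &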

Summary of power usage for this box.

 POWER  TYPE  CONFIGURATION
 7.5 W  IDLE  System
 8.0 W  IDLE  System + SanDisk 16GB drive
 8.5 W  IDLE  System + SanDisk 16GB drive + Maxtor M3 4TB drive + CPU 1.2 Ghz limit
 8.5 W  IDLE  System + SanDisk 16GB drive + Maxtor M3 4TB drive
13.9 W  LOAD  System + SanDisk 16GB drive + Maxtor M3 4TB drive + CPU 1.2 Ghz limit
16.0 W  LOAD  System + SanDisk 16GB drive + Maxtor M3 4TB drive

For comparison, the Raspberry Pi 2B with a 16GB MicroSD card attached used only 1.5W, but we all know how slow it is. When used with the Western Digital 2TB 2.5 USB 3.0 drive it used about 2.2W at idle.

Configuration for Low Power Consumption

Below are FreeBSD configuration files used in this box to lower the power consumption.

The /etc/sysctl.conf file.

# ANNOYING THINGS
  vfs.usermount=1
  kern.coredump=0
  hw.syscons.bell=0
  kern.vt.enable_bell=0

# LIMIT ZFS ARC EFFICIENTLY
  kern.maxvnodes=32768

# ALLOW UPGRADES IN JAILS
  security.jail.chflags_allowed=1

# ALLOW RAW SOCKETS IN JAILS
  security.jail.param.allow.raw_sockets=1
  security.jail.allow_raw_sockets=1

# RANDOM PID
  kern.randompid=12345

# PERFORMANCE/ALL SHARED MEMORY SEGMENTS WILL BE MAPPED TO UNPAGEABLE RAM 
  kern.ipc.shm_use_phys=1

# MEMORY OVERCOMMIT SEE tuning(7)
  vm.overcommit=2

# NETWORK/DO NOT SEND RST ON SEGMENTS TO CLOSED PORTS
  net.inet.tcp.blackhole=2

# NETWORK/DO NOT SEND PORT UNREACHABLES FOR REFUSED CONNECTS
  net.inet.udp.blackhole=1

# NETWORK/ENABLE SCTP BLACKHOLING blackhole(4) FOR MORE DETAILS
  net.inet.sctp.blackhole=1

# NETWORK/MAX SIZE OF AUTOMATIC RECEIVE BUFFER (2097152) [4x]
  net.inet.tcp.recvbuf_max=8388608

# NETWORK/MAX SIZE OF AUTOMATIC SEND BUFFER (2097152) [4x]
  net.inet.tcp.sendbuf_max=8388608

# NETWORK/MAXIMUM SOCKET BUFFER SIZE (5242880) [3.2x]
  kern.ipc.maxsockbuf=16777216

# NETWORK/MAXIMUM LISTEN SOCKET PENDING CONNECTION ACCEPT QUEUE SIZE (128) [8x]
  kern.ipc.soacceptqueue=1024

# NETWORK/DEFAULT tcp MAXIMUM SEGMENT SIZE (536) [2.7x]
  net.inet.tcp.mssdflt=1460

# NETWORK/MINIMUM TCP MAXIMUM SEGMENT SIZE (216) [6x]
  net.inet.tcp.minmss=1300

# NETWORK/LIMIT ON SYN/ACK RETRANSMISSIONS (3)
  net.inet.tcp.syncache.rexmtlimit=0

# NETWORK/USE TCP SYN COOKIES IF THE SYNCACHE OVERFLOWS (1)
  net.inet.tcp.syncookies=0

# NETWORK/ENABLE TCP SEGMENTATION OFFLOAD (1)
  net.inet.tcp.tso=0

# NETWORK/ENABLE IP OPTIONS PROCESSING ([LS]SRR, RR, TS) (1)
  net.inet.ip.process_options=0

# NETWORK/ASSIGN RANDOM ip_id VALUES (0)
  net.inet.ip.random_id=1

# NETWORK/ENABLE SENDING IP REDIRECTS (1)
  net.inet.ip.redirect=0

# NETWORK/IGNORE ICMP REDIRECTS (0)
  net.inet.icmp.drop_redirect=1

# NETWORK/ASSUME SO_KEEPALIVE ON ALL TCP CONNECTIONS (1)
  net.inet.tcp.always_keepalive=0

# NETWORK/DROP TCP PACKETS WITH SYN+FIN SET (0)
  net.inet.tcp.drop_synfin=1

# NETWORK/RECYCLE CLOSED FIN_WAIT_2 CONNECTIONS FASTER (0)
  net.inet.tcp.fast_finwait2_recycle=1

# NETWORK/CERTAIN ICMP UNREACHABLE MESSAGES MAY ABORT CONNECTIONS IN SYN_SENT (1)
  net.inet.tcp.icmp_may_rst=0

# NETWORK/MAXIMUM SEGMENT LIFETIME (30000) [0.27x]
  net.inet.tcp.msl=8192

# NETWORK/ENABLE PATH MTU DISCOVERY (1)
  net.inet.tcp.path_mtu_discovery=0

# NETWORK/EXPIRE TIME OF TCP HOSTCACHE ENTRIES (3600) [2x]
  net.inet.tcp.hostcache.expire=7200

# NETWORK/TIME BEFORE DELAYED ACK IS SENT (100) [0.2x]
  net.inet.tcp.delacktime=20

The /boot/loader.conf file.

# BOOT OPTIONS
  autoboot_delay=1
  boot_mute=YES

# MODULES FOR BOOT
  zfs_load=YES

# DISABLE HYPER THREADING
  machdep.hyperthreading_allowed=0

# REDUCE NUMBER OF SOUND GENERATED INTERRUPTS
  hw.snd.latency=7

# RACCT/RCTL RESOURCE LIMITS
  kern.racct.enable=1

# PIPE KVA LIMIT | 320 MB
  kern.ipc.maxpipekva=335544320

# NUMBER OF SEGMENTS PER PROCESS
  kern.ipc.shmseg=1024

# LARGE PAGE MAPPINGS
  vm.pmap.pg_ps_enabled=1

# SHARED MEMORY
  kern.ipc.shmmni=1024
  kern.ipc.shmseg=1024

# ZFS TUNING
  vfs.zfs.prefetch_disable=1
  vfs.zfs.cache_flush_disable=1
  vfs.zfs.vdev.cache.size=16M
  vfs.zfs.arc_min=32M
  vfs.zfs.arc_max=128M
  vfs.zfs.txg.timeout=1

# NETWORK MAX SEND QUEUE SIZE
  net.link.ifqmaxlen=2048

# POWER OFF DEVICES WITHOUT ATTACHED DRIVER
  hw.pci.do_power_nodriver=3

# AHCI POWER MANAGEMENT FOR EVERY USED CHANNEL (ahcich 0-7)
  hint.ahcich.0.pm_level=5
  hint.ahcich.1.pm_level=5
  hint.ahcich.2.pm_level=5
  hint.ahcich.3.pm_level=5
  hint.ahcich.4.pm_level=5
  hint.ahcich.5.pm_level=5
  hint.ahcich.6.pm_level=5
  hint.ahcich.7.pm_level=5

# GELI THREADS
  kern.geom.eli.threads=2
  kern.geom.eli.batch=1

The /etc/rc.conf file.

# NETWORK
  hostname=offsite.local
  background_dhclient=YES
  extra_netfs_types=NFS
  defaultroute_delay=3
  defaultroute_carrier_delay=3

# MODULES/COMMON/BASE
  kld_list="${kld_list} aesni geom_eli"
  kld_list="${kld_list} fuse coretemp sem cpuctl ichsmb cc_htcp"
  kld_list="${kld_list} libiconv cd9660_iconv msdosfs_iconv udf_iconv"

# POWER
  performance_cx_lowest=C1
  economy_cx_lowest=Cmax
  powerd_enable=YES
  powerd_flags="-n adaptive -a hiadaptive -b adaptive -m 400 -M 1200"

# DAEMONS | yes
  zfs_enable=YES
  nfs_client_enable=YES
  syslogd_flags='-s -s'
  sshd_enable=YES

# DAEMONS | no
  sendmail_enable=NONE
  sendmail_submit_enable=NO
  sendmail_outbound_enable=NO
  sendmail_msp_queue_enable=NO

# FS
  fsck_y_enable=YES
  clear_tmp_enable=YES
  clear_tmp_X=YES
  growfs_enable=YES

# OTHER
  keyrate=fast
  font8x14=vgarom-8x14
  virecover_enable=NO
  update_motd=NO
  devfs_system_ruleset=desktop
  hostid_enable=NO
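
Most of the /etc/sysctl.conf and /etc/rc.conf settings can be applied without a reboot – a quick sketch; the /boot/loader.conf tunables, however, require a reboot.

# service sysctl restart
# service powerd restart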

USB Boot Drive

I was not sure if I should use a USB 2.0 drive or a USB 3.0 drive for the FreeBSD system, so I got both versions from SanDisk and tested their performance with the pv(1) and diskinfo(8) tools. The pv(1) utility had the options shown below enabled, and for diskinfo(8) the -c and -i parameters were used.

% which pv
pv: aliased to pv -t -r -a -b -W -B 1048576
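
The diskinfo(8) part of the test then boils down to a single invocation per drive – a sketch, assuming the stick shows up as da0; -c measures the command overhead and -i measures IOPS.

# diskinfo -c -i da0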

The dmesg(8) information for the SanDisk Fit USB 2.0 16GB drive.

# dmesg | tail -6
da0 at umass-sim0 bus 0 scbus3 target 0 lun 0
da0:  Removable Direct Access SPC-4 SCSI device
da0: Serial Number 4C530001100609104091
da0: 40.000MB/s transfers
da0: 15060MB (30842880 512 byte sectors)
da0: quirks=0x2

The dmesg(8) information for the SanDisk Fit USB 3.0 16GB drive.

# dmesg | tail -6
da0 at umass-sim0 bus 0 scbus3 target 0 lun 0
da0:  Removable Direct Access SPC-4 SCSI device
da0: Serial Number 4C530001070202100093
da0: 40.000MB/s transfers
da0: 14663MB (30031250 512 byte sectors)
da0: quirks=0x2

There is also a noticeable size difference, as the USB 2.0 version has an additional 400 MB of space!

By the way … the SanDisk Fit USB 3.0 16GB came with this sticker inside the box – a serial number for the RescuePRO Deluxe software – which I will never use. Not because it is bad or anything, but because I have no such needs. You may take it … of course unless someone else has taken it already :)

silent-backup-license.jpg

Below are the results of the benchmarks. I tested them in both USB 2.0 and USB 3.0 ports.


                   DRIVE  USB  pv/READ  pv/WRITE  diskinfo/OVERHEAD  diskinfo/IOPS
SanDisk Fit USB 2.0 16GB  2.0   29MB/s     5MB/s   0.712msec/sector           2521
SanDisk Fit USB 2.0 16GB  3.0   33MB/s     5MB/s   0.799msec/sector           2441
SanDisk Fit USB 3.0 16GB  2.0   35MB/s     9MB/s   0.618msec/sector           1920
SanDisk Fit USB 3.0 16GB  3.0   91MB/s    11MB/s   0.567msec/sector           1588

What is also interesting is that while the USB 2.0 version has lower throughput, it has more IOPS than the newer USB 3.0 incarnation of the SanDisk Fit drive. I also did another, more real-life test. I checked how long it would take to boot the FreeBSD system installed on each of them, from the loader(8) screen to the login: prompt. The difference is 5 seconds. Details are shown below.

 TIME  DRIVE
  28s  SanDisk Fit USB 3.0 16GB
  33s  SanDisk Fit USB 2.0 16GB

With such a small ~15% difference I will use the SanDisk Fit USB 2.0 16GB, as it sticks out a little less from the slot, as shown below.

silent-backup-usb-drives.jpg

Cloud Storage Prices Comparison

Tarsnap – “online backups for the truly paranoid” – costs $0.25/GB/month. The Tarsnap price is for data transmitted after deduplication and compression, but that does not change much here. For my data the compressratio property of the ZFS dataset is at 3% (1.03). When I estimate the deduplication savings with the zdb -S pool command I get an additional 1% of savings (1.01). Let's assume that both deduplication and compression together give 5% (1.05) savings. That would lower the Tarsnap price to $0.2375/GB/month.

The Backblaze B2 Cloud Storage – storage costs $0.005/GB/month.

Our single 4TB disk solution costs $230 for, let's say, 3 years. You can expect a disk failure after that period, but it may serve you just as well for another 3 years. Now that we know the cloud storage prices, let's calculate the price of 4TB of data stored for 3 years in these cloud services.

Self Solution Electricity Cost

We also need to calculate how much energy our solution would consume. Currently 1kWh of power costs about $0.20 in the Europe/Poland location (rounded up). This means that running a computer with 1000W power usage for 1 hour would cost you $0.20 on the electricity bill. Our solution idles at 8.5W and uses 13.9W when fully loaded. It will be idle for most of the time, so I will assume that it uses 10W on average here. That would cost us $0.002 for a 10W device running for 1 hour.

Below you will also find calculations for 1 day (24x multiplier), 1 year (another 365.25x multiplier) and 3 years (another 3x multiplier).

  COST  TIME
$0.002  1 HOUR
$0.048  1 DAY
$17.53  1 YEAR
$52.60  3 YEARS

Our total 3-year cost is $282.60 ($230 for building the system plus $52.60 for electricity), running it non-stop. We can also implement features like Wake On LAN to limit that power usage even more.

Here are the prices of these cloud storage providers.


PROVIDER     PRICE  DATA  TIME
Tarsnap    $0.2375   1GB  1 Month
Backblaze  $0.0050   1GB  1 Month

The price for 1 month of keeping 4TB of data on these providers looks as follows.


PROVIDER   PRICE  DATA  TIME
Tarsnap     $973   4TB  1 Month
Backblaze    $20   4TB  1 Month

For just 1 month, Tarsnap is 4 TIMES more expensive than keeping the backup on your own computer with a 4TB disk. The Backblaze service is at about 1/10 of that cost, which is still reasonable.

Let's compare prices for 3 years of 4TB storage.


PROVIDER    PRICE  DATA  TIME
Tarsnap    $35021   4TB  3 Years
Backblaze    $737   4TB  3 Years

After 3 years the Backblaze solution is about 2.5 TIMES more expensive than our personal setup, but if you really do not want to create your own solution, the difference over 3 years is not that big. Tarsnap is out of bounds here, being more than 120 TIMES more expensive than the self-hosted solution. Remember that I also did not include the costs of transferring the data into or from the cloud storage. That would make the cloud storage costs even bigger, depending on how often you would want to pull/push your data.

EOF

IBM TSM (Spectrum Protect) on Veritas Cluster Server

Until today I have mostly shared articles about free and open systems. Now it is time to share some so-called enterprise experience :) Not so long ago I set up an IBM TSM instance as a highly available service on Symantec Veritas Cluster Server.

ibm-tsm-logo.png

If you prefer to use an open and free backup solution, then check the Bareos Backup Server on FreeBSD article.

IBM TSM (Tivoli Storage Manager) has been rebranded by IBM into IBM Spectrum Protect, and in a similar period of time Symantec moved Veritas Cluster Server into InfoScale Availability, while creating a separate/dedicated Veritas company for this purpose.

The instructions I want to share today are surely the same for the latest versions of Veritas Cluster Server and its later InfoScale Availability incarnations, and the IBM Spectrum Protect 8.1 family introduction was mostly related to rebranding/cleaning up the whole set of Spectrum Protect/TSM modules and additions so that they all carry a common 8.1 label. As these instructions were made for the IBM TSM (Spectrum Protect) 7.1.6 version, they should still be very similar for current versions.

This highly available IBM TSM instance is part of a whole Backup Consolidation project which uses two physical servers to serve both this IBM TSM service and a Dell/EMC Networker backup server. When everything is OK, one of the nodes is dedicated to IBM TSM and the other one is used by Dell/EMC Networker, so all physical resources are well saturated and we do not ‘waste’ a whole node that sits empty 99% of the time waiting for the first node to crash. Of course if the first node misbehaves or has a hardware failure, then both IBM TSM and Dell/EMC Networker run nicely on a single node. It is also very convenient for various maintenance tasks to be able to switch all services to the other node and work in peace on the first one, but I do not have to tell you that. The third and last service shared between these two nodes is the Oracle RMAN Catalog holding metadata for the Oracle databases – also for backup/restore purposes.

I will not write instructions here for installing the operating system (we use amd64 RHEL 6.x) or for setting up the Veritas Cluster Server, as I installed it earlier and it is quite simple to set up. These instructions focus on creating the IBM TSM highly available service and using/allocating the resources from the IBM Storwize V5030 storage array, where 400 GB SSD disks are dedicated to the IBM TSM DB2 database instance and 1.8 TB 10K SAS disks are dedicated to DRAID groups that will serve space for the IBM TSM storage pools, implemented as the latest IBM TSM container pools with deduplication and compression enabled. The head of the IBM Storwize V5030 storage array is shown below.

ibm-tsm-v5030-photo.jpg

Each node is an IBM System x3650 M4 server with two dual-port 8Gb FC cards and one dual-port 10GE card … along with built-in 1GE cards for the Veritas Cluster Server heartbeats. Each has 192 GB RAM and dual 6-core CPUs @ 3.5 GHz, which translates to 12 physical cores or 24 HTT threads per node. The three internal SSD drives are used for the system only, in a RAID1 + SPARE configuration. All clustered resources come from the IBM Storwize V5030 FC/SAN storage array. The operating system installed on these nodes is amd64 RHEL 6.x and the Veritas Cluster Server is at the 6.2.x version. The IBM System x3650 M4 server is shown below.

ibm-tsm-x3650-m4.jpg

All of the settings/tuning/decisions were made based on the IBM TSM documentation and the great IBM Spectrum Protect Blueprints resources from the valuable IBM developerWorks wiki.

Storage Array Setup

First we need to create the MDISKs. We used DRAID with double parity protection plus spare for each MDISK, with 17 SAS 1.8 TB 10K disks each. That gives 14 disks for data, 2 for parity, and 1 spare, all of which provide I/O thanks to the DRAID setup. We have three such MDISKs with ~21.7 TB each, for a total of 65.1 TB for IBM TSM containers. Of course all these 3 ‘pool’ MDISKs are in one Storage Group. The LUNs for the IBM TSM DB2 database came from 5 SSD 400 GB disks set up in a DRAID with 1 parity and 1 spare disk. This gives 3 disks for data, 1 for parity, and 1 for spare space – about 1.1 TB for the IBM TSM DB2 database.

Here are the LUNs created from these MDISKs.

ibm-tsm-v5030.png

I needed to remove some names of course :)

LUNs Initialization

Veritas Cluster Server needs to have storage prepared with disk groups, which are similar in concept to (but more powerful than) LVM. Below are the instructions to first detect and then initialize these LUNs from the IBM Storwize V5030 storage array.

[root@300 ~]# haconf -makerw
[root@300 ~]# vxdisk -o alldgs list
DEVICE                TYPE            DISK         GROUP        STATUS
disk_0                auto:LVM        -            -            online invalid
storwizev70000_00000a auto:cdsdisk    -            (dg_fencing) online
storwizev70000_00000b auto:cdsdisk    stgFC_00B    NSR_dg_nsr   online
storwizev70000_00000c auto:cdsdisk    stgFC_00C    NSR_dg_nsr   online
storwizev70000_00000d auto:cdsdisk    stgFC_00D    NSR_dg_nsr   online
storwizev70000_00000e auto:cdsdisk    stgFC_00E    NSR_dg_nsr   online
storwizev70000_00000f auto:cdsdisk    -            (RMAN_dg)    online
storwizev70000_00001a auto:none       -            -            online invalid
storwizev70000_00001b auto:none       -            -            online invalid
storwizev70000_00001c auto:none       -            -            online invalid
storwizev70000_00001d auto:none       -            -            online invalid
storwizev70000_00001e auto:none       -            -            online invalid
storwizev70000_00001f auto:none       -            -            online invalid
storwizev70000_000008 auto:cdsdisk    -            (dg_fencing) online
storwizev70000_000009 auto:cdsdisk    -            (dg_fencing) online
storwizev70000_000010 auto:cdsdisk    -            (RMAN_dg)    online
storwizev70000_000011 auto:cdsdisk    -            (RMAN_dg)    online
storwizev70000_000012 auto:none       -            -            online invalid
storwizev70000_000013 auto:none       -            -            online invalid
storwizev70000_000014 auto:none       -            -            online invalid
storwizev70000_000015 auto:none       -            -            online invalid
storwizev70000_000016 auto:none       -            -            online invalid
storwizev70000_000017 auto:none       -            -            online invalid
storwizev70000_000018 auto:none       -            -            online invalid
storwizev70000_000019 auto:none       -            -            online invalid
storwizev70000_000020 auto:none       -            -            online invalid
[root@300 ~]# vxdisksetup -i storwizev70000_00001a
[root@300 ~]# vxdisksetup -i storwizev70000_00001b
[root@300 ~]# vxdisksetup -i storwizev70000_00001c
[root@300 ~]# vxdisksetup -i storwizev70000_00001d
[root@300 ~]# vxdisksetup -i storwizev70000_00001e
[root@300 ~]# vxdisksetup -i storwizev70000_00001f
[root@300 ~]# vxdisksetup -i storwizev70000_000012
[root@300 ~]# vxdisksetup -i storwizev70000_000013
[root@300 ~]# vxdisksetup -i storwizev70000_000014
[root@300 ~]# vxdisksetup -i storwizev70000_000015
[root@300 ~]# vxdisksetup -i storwizev70000_000016
[root@300 ~]# vxdisksetup -i storwizev70000_000017
[root@300 ~]# vxdisksetup -i storwizev70000_000018
[root@300 ~]# vxdisksetup -i storwizev70000_000019
[root@300 ~]# vxdisksetup -i storwizev70000_000020
[root@300 ~]# vxdisk -o alldgs list
DEVICE                TYPE            DISK         GROUP        STATUS
disk_0                auto:LVM        -            -            online invalid
storwizev70000_00000a auto:cdsdisk    -            (dg_fencing) online
storwizev70000_00000b auto:cdsdisk    stgFC_00B    NSR_dg_nsr   online
storwizev70000_00000c auto:cdsdisk    stgFC_00C    NSR_dg_nsr   online
storwizev70000_00000d auto:cdsdisk    stgFC_00D    NSR_dg_nsr   online
storwizev70000_00000e auto:cdsdisk    stgFC_00E    NSR_dg_nsr   online
storwizev70000_00000f auto:cdsdisk    -            (RMAN_dg)    online
storwizev70000_00001a auto:cdsdisk    -            -            online
storwizev70000_00001b auto:cdsdisk    -            -            online
storwizev70000_00001c auto:cdsdisk    -            -            online
storwizev70000_00001d auto:cdsdisk    -            -            online
storwizev70000_00001e auto:cdsdisk    -            -            online
storwizev70000_00001f auto:cdsdisk    -            -            online
storwizev70000_000008 auto:cdsdisk    -            (dg_fencing) online
storwizev70000_000009 auto:cdsdisk    -            (dg_fencing) online
storwizev70000_000010 auto:cdsdisk    -            (RMAN_dg)    online
storwizev70000_000011 auto:cdsdisk    -            (RMAN_dg)    online
storwizev70000_000012 auto:cdsdisk    -            -            online
storwizev70000_000013 auto:cdsdisk    -            -            online
storwizev70000_000014 auto:cdsdisk    -            -            online
storwizev70000_000015 auto:cdsdisk    -            -            online
storwizev70000_000016 auto:cdsdisk    -            -            online
storwizev70000_000017 auto:cdsdisk    -            -            online
storwizev70000_000018 auto:cdsdisk    -            -            online
storwizev70000_000019 auto:cdsdisk    -            -            online
storwizev70000_000020 auto:cdsdisk    -            -            online
[root@300 ~]# vxdg init TSM0_dg \
                stgFC_020=storwizev70000_000020 \
                stgFC_012=storwizev70000_000012 \
                stgFC_016=storwizev70000_000016 \
                stgFC_013=storwizev70000_000013 \
                stgFC_014=storwizev70000_000014 \
                stgFC_015=storwizev70000_000015 \
                stgFC_017=storwizev70000_000017 \
                stgFC_018=storwizev70000_000018 \
                stgFC_019=storwizev70000_000019 \
                stgFC_01A=storwizev70000_00001a \
                stgFC_01B=storwizev70000_00001b \
                stgFC_01C=storwizev70000_00001c \
                stgFC_01D=storwizev70000_00001d \
                stgFC_01E=storwizev70000_00001e \
                stgFC_01F=storwizev70000_00001f
[root@300 ~]# vxdisk -o alldgs list
DEVICE                TYPE            DISK         GROUP        STATUS
disk_0                auto:LVM        -            -            online invalid
storwizev70000_00000a auto:cdsdisk    -            (dg_fencing) online
storwizev70000_00000b auto:cdsdisk    stgFC_00B    NSR_dg_nsr   online
storwizev70000_00000c auto:cdsdisk    stgFC_00C    NSR_dg_nsr   online
storwizev70000_00000d auto:cdsdisk    stgFC_00D    NSR_dg_nsr   online
storwizev70000_00000e auto:cdsdisk    stgFC_00E    NSR_dg_nsr   online
storwizev70000_00000f auto:cdsdisk    -            (RMAN_dg)    online
storwizev70000_00001a auto:cdsdisk    stgFC_01A    TSM0_dg      online
storwizev70000_00001b auto:cdsdisk    stgFC_01B    TSM0_dg      online
storwizev70000_00001c auto:cdsdisk    stgFC_01C    TSM0_dg      online
storwizev70000_00001d auto:cdsdisk    stgFC_01D    TSM0_dg      online
storwizev70000_00001e auto:cdsdisk    stgFC_01E    TSM0_dg      online
storwizev70000_00001f auto:cdsdisk    stgFC_01F    TSM0_dg      online
storwizev70000_000008 auto:cdsdisk    -            (dg_fencing) online
storwizev70000_000009 auto:cdsdisk    -            (dg_fencing) online
storwizev70000_000010 auto:cdsdisk    -            (RMAN_dg)    online
storwizev70000_000011 auto:cdsdisk    -            (RMAN_dg)    online
storwizev70000_000012 auto:cdsdisk    stgFC_012    TSM0_dg      online
storwizev70000_000013 auto:cdsdisk    stgFC_013    TSM0_dg      online
storwizev70000_000014 auto:cdsdisk    stgFC_014    TSM0_dg      online
storwizev70000_000015 auto:cdsdisk    stgFC_015    TSM0_dg      online
storwizev70000_000016 auto:cdsdisk    stgFC_016    TSM0_dg      online
storwizev70000_000017 auto:cdsdisk    stgFC_017    TSM0_dg      online
storwizev70000_000018 auto:cdsdisk    stgFC_018    TSM0_dg      online
storwizev70000_000019 auto:cdsdisk    stgFC_019    TSM0_dg      online
storwizev70000_000020 auto:cdsdisk    stgFC_020    TSM0_dg      online
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_instance     maxsize=32G   stgFC_020
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_active_log   maxsize=128G  stgFC_012
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_archive_log  maxsize=384G  stgFC_016
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_db_01        maxsize=300G  stgFC_013
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_db_02        maxsize=300G  stgFC_014
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_db_03        maxsize=300G  stgFC_015
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_db_backup_01 maxsize=900G  stgFC_017
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_db_backup_02 maxsize=900G  stgFC_018
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_db_backup_03 maxsize=900G  stgFC_019
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_pool0_01     maxsize=6700G stgFC_01A
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_pool0_02     maxsize=6700G stgFC_01B
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_pool0_03     maxsize=6700G stgFC_01C
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_pool0_04     maxsize=6700G stgFC_01D
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_pool0_05     maxsize=6700G stgFC_01E
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_pool0_06     maxsize=6700G stgFC_01F
[root@300 ~]# vxprint -u h | grep ^sd | column -t
sd  stgFC_00B-01  NSR_vol_index-02          ENABLED  399.95g  0.00  -  -  -
sd  stgFC_00C-01  NSR_vol_media-02          ENABLED  9.96g    0.00  -  -  -
sd  stgFC_00D-01  NSR_vol_nsr-02            ENABLED  79.96g   0.00  -  -  -
sd  stgFC_00E-01  NSR_vol_res-02            ENABLED  9.96g    0.00  -  -  -
sd  stgFC_012-01  TSM0_vol_active_log-01    ENABLED  127.96g  0.00  -  -  -
sd  stgFC_016-01  TSM0_vol_archive_log-01   ENABLED  383.95g  0.00  -  -  -
sd  stgFC_017-01  TSM0_vol_db_backup_01-01  ENABLED  899.93g  0.00  -  -  -
sd  stgFC_018-01  TSM0_vol_db_backup_02-01  ENABLED  899.93g  0.00  -  -  -
sd  stgFC_019-01  TSM0_vol_db_backup_03-01  ENABLED  899.93g  0.00  -  -  -
sd  stgFC_013-01  TSM0_vol_db_01-01         ENABLED  299.95g  0.00  -  -  -
sd  stgFC_014-01  TSM0_vol_db_02-01         ENABLED  299.95g  0.00  -  -  -
sd  stgFC_015-01  TSM0_vol_db_03-01         ENABLED  299.95g  0.00  -  -  -
sd  stgFC_020-01  TSM0_vol_instance-01      ENABLED  31.96g   0.00  -  -  -
sd  stgFC_01A-01  TSM0_vol_pool0_01-01      ENABLED  6.54t    0.00  -  -  -
sd  stgFC_01B-01  TSM0_vol_pool0_02-01      ENABLED  6.54t    0.00  -  -  -
sd  stgFC_01C-01  TSM0_vol_pool0_03-01      ENABLED  6.54t    0.00  -  -  -
sd  stgFC_01D-01  TSM0_vol_pool0_04-01      ENABLED  6.54t    0.00  -  -  -
sd  stgFC_01E-01  TSM0_vol_pool0_05-01      ENABLED  6.54t    0.00  -  -  -
sd  stgFC_01F-01  TSM0_vol_pool0_06-01      ENABLED  6.54t    0.00  -  -  -
[root@300 ~]# vxprint -u h -g TSM0_dg | column -t
TY  NAME                      ASSOC                     KSTATE   LENGTH   PLOFFS  STATE   TUTIL0  PUTIL0
dg  TSM0_dg                   TSM0_dg                   -        -        -       -       -       -
dm  stgFC_01A                 storwizev70000_00001a     -        6.54t    -       -       -       -
dm  stgFC_01B                 storwizev70000_00001b     -        6.54t    -       -       -       -
dm  stgFC_01C                 storwizev70000_00001c     -        6.54t    -       -       -       -
dm  stgFC_01D                 storwizev70000_00001d     -        6.54t    -       -       -       -
dm  stgFC_01E                 storwizev70000_00001e     -        6.54t    -       -       -       -
dm  stgFC_01F                 storwizev70000_00001f     -        6.54t    -       -       -       -
dm  stgFC_012                 storwizev70000_000012     -        127.96g  -       -       -       -
dm  stgFC_013                 storwizev70000_000013     -        299.95g  -       -       -       -
dm  stgFC_014                 storwizev70000_000014     -        299.95g  -       -       -       -
dm  stgFC_015                 storwizev70000_000015     -        299.95g  -       -       -       -
dm  stgFC_016                 storwizev70000_000016     -        383.95g  -       -       -       -
dm  stgFC_017                 storwizev70000_000017     -        899.93g  -       -       -       -
dm  stgFC_018                 storwizev70000_000018     -        899.93g  -       -       -       -
dm  stgFC_019                 storwizev70000_000019     -        899.93g  -       -       -       -
dm  stgFC_020                 storwizev70000_000020     -        31.96g   -       -       -       -

v   TSM0_vol_active_log       fsgen                     ENABLED  127.96g  -       ACTIVE  -       -
pl  TSM0_vol_active_log-01    TSM0_vol_active_log       ENABLED  127.96g  -       ACTIVE  -       -
sd  stgFC_012-01              TSM0_vol_active_log-01    ENABLED  127.96g  0.00    -       -       -

v   TSM0_vol_archive_log      fsgen                     ENABLED  383.95g  -       ACTIVE  -       -
pl  TSM0_vol_archive_log-01   TSM0_vol_archive_log      ENABLED  383.95g  -       ACTIVE  -       -
sd  stgFC_016-01              TSM0_vol_archive_log-01   ENABLED  383.95g  0.00    -       -       -

v   TSM0_vol_db_backup_01     fsgen                     ENABLED  899.93g  -       ACTIVE  -       -
pl  TSM0_vol_db_backup_01-01  TSM0_vol_db_backup_01     ENABLED  899.93g  -       ACTIVE  -       -
sd  stgFC_017-01              TSM0_vol_db_backup_01-01  ENABLED  899.93g  0.00    -       -       -

v   TSM0_vol_db_backup_02     fsgen                     ENABLED  899.93g  -       ACTIVE  -       -
pl  TSM0_vol_db_backup_02-01  TSM0_vol_db_backup_02     ENABLED  899.93g  -       ACTIVE  -       -
sd  stgFC_018-01              TSM0_vol_db_backup_02-01  ENABLED  899.93g  0.00    -       -       -

v   TSM0_vol_db_backup_03     fsgen                     ENABLED  899.93g  -       ACTIVE  -       -
pl  TSM0_vol_db_backup_03-01  TSM0_vol_db_backup_03     ENABLED  899.93g  -       ACTIVE  -       -
sd  stgFC_019-01              TSM0_vol_db_backup_03-01  ENABLED  899.93g  0.00    -       -       -

v   TSM0_vol_db_01            fsgen                     ENABLED  299.95g  -       ACTIVE  -       -
pl  TSM0_vol_db_01-01         TSM0_vol_db_01            ENABLED  299.95g  -       ACTIVE  -       -
sd  stgFC_013-01              TSM0_vol_db_01-01         ENABLED  299.95g  0.00    -       -       -

v   TSM0_vol_db_02            fsgen                     ENABLED  299.95g  -       ACTIVE  -       -
pl  TSM0_vol_db_02-01         TSM0_vol_db_02            ENABLED  299.95g  -       ACTIVE  -       -
sd  stgFC_014-01              TSM0_vol_db_02-01         ENABLED  299.95g  0.00    -       -       -

v   TSM0_vol_db_03            fsgen                     ENABLED  299.95g  -       ACTIVE  -       -
pl  TSM0_vol_db_03-01         TSM0_vol_db_03            ENABLED  299.95g  -       ACTIVE  -       -
sd  stgFC_015-01              TSM0_vol_db_03-01         ENABLED  299.95g  0.00    -       -       -

v   TSM0_vol_instance         fsgen                     ENABLED  31.96g   -       ACTIVE  -       -
pl  TSM0_vol_instance-01      TSM0_vol_instance         ENABLED  31.96g   -       ACTIVE  -       -
sd  stgFC_020-01              TSM0_vol_instance-01      ENABLED  31.96g   0.00    -       -       -

v   TSM0_vol_pool0_01         fsgen                     ENABLED  6.54t    -       ACTIVE  -       -
pl  TSM0_vol_pool0_01-01      TSM0_vol_pool0_01         ENABLED  6.54t    -       ACTIVE  -       -
sd  stgFC_01A-01              TSM0_vol_pool0_01-01      ENABLED  6.54t    0.00    -       -       -

v   TSM0_vol_pool0_02         fsgen                     ENABLED  6.54t    -       ACTIVE  -       -
pl  TSM0_vol_pool0_02-01      TSM0_vol_pool0_02         ENABLED  6.54t    -       ACTIVE  -       -
sd  stgFC_01B-01              TSM0_vol_pool0_02-01      ENABLED  6.54t    0.00    -       -       -

v   TSM0_vol_pool0_03         fsgen                     ENABLED  6.54t    -       ACTIVE  -       -
pl  TSM0_vol_pool0_03-01      TSM0_vol_pool0_03         ENABLED  6.54t    -       ACTIVE  -       -
sd  stgFC_01C-01              TSM0_vol_pool0_03-01      ENABLED  6.54t    0.00    -       -       -

v   TSM0_vol_pool0_04         fsgen                     ENABLED  6.54t    -       ACTIVE  -       -
pl  TSM0_vol_pool0_04-01      TSM0_vol_pool0_04         ENABLED  6.54t    -       ACTIVE  -       -
sd  stgFC_01D-01              TSM0_vol_pool0_04-01      ENABLED  6.54t    0.00    -       -       -

v   TSM0_vol_pool0_05         fsgen                     ENABLED  6.54t    -       ACTIVE  -       -
pl  TSM0_vol_pool0_05-01      TSM0_vol_pool0_05         ENABLED  6.54t    -       ACTIVE  -       -
sd  stgFC_01E-01              TSM0_vol_pool0_05-01      ENABLED  6.54t    0.00    -       -       -

v   TSM0_vol_pool0_06         fsgen                     ENABLED  6.54t    -       ACTIVE  -       -
pl  TSM0_vol_pool0_06-01      TSM0_vol_pool0_06         ENABLED  6.54t    -       ACTIVE  -       -
sd  stgFC_01F-01              TSM0_vol_pool0_06-01      ENABLED  6.54t    0.00    -       -       -
[root@300 ~]# vxinfo -p -g TSM0_dg | column -t
vol   TSM0_vol_instance         fsgen   Started
plex  TSM0_vol_instance-01      ACTIVE
vol   TSM0_vol_active_log       fsgen   Started
plex  TSM0_vol_active_log-01    ACTIVE
vol   TSM0_vol_archive_log      fsgen   Started
plex  TSM0_vol_archive_log-01   ACTIVE
vol   TSM0_vol_db_01            fsgen   Started
plex  TSM0_vol_db_01-01         ACTIVE
vol   TSM0_vol_db_02            fsgen   Started
plex  TSM0_vol_db_02-01         ACTIVE
vol   TSM0_vol_db_03            fsgen   Started
plex  TSM0_vol_db_03-01         ACTIVE
vol   TSM0_vol_db_backup_01     fsgen   Started
plex  TSM0_vol_db_backup_01-01  ACTIVE
vol   TSM0_vol_db_backup_02     fsgen   Started
plex  TSM0_vol_db_backup_02-01  ACTIVE
vol   TSM0_vol_db_backup_03     fsgen   Started
plex  TSM0_vol_db_backup_03-01  ACTIVE
vol   TSM0_vol_pool0_01         fsgen   Started
plex  TSM0_vol_pool0_01-01      ACTIVE
vol   TSM0_vol_pool0_02         fsgen   Started
plex  TSM0_vol_pool0_02-01      ACTIVE
vol   TSM0_vol_pool0_03         fsgen   Started
plex  TSM0_vol_pool0_03-01      ACTIVE
vol   TSM0_vol_pool0_04         fsgen   Started
plex  TSM0_vol_pool0_04-01      ACTIVE
vol   TSM0_vol_pool0_05         fsgen   Started
plex  TSM0_vol_pool0_05-01      ACTIVE
vol   TSM0_vol_pool0_06         fsgen   Started
plex  TSM0_vol_pool0_06-01      ACTIVE
[root@300 ~]# find /dev/vx/dsk -name TSM0_\*
/dev/vx/dsk/TSM0_dg
/dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_06
/dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_05
/dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_04
/dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_03
/dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_02
/dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_01
/dev/vx/dsk/TSM0_dg/TSM0_vol_db_backup_03
/dev/vx/dsk/TSM0_dg/TSM0_vol_db_backup_02
/dev/vx/dsk/TSM0_dg/TSM0_vol_db_backup_01
/dev/vx/dsk/TSM0_dg/TSM0_vol_db_03
/dev/vx/dsk/TSM0_dg/TSM0_vol_db_02
/dev/vx/dsk/TSM0_dg/TSM0_vol_db_01
/dev/vx/dsk/TSM0_dg/TSM0_vol_archive_log
/dev/vx/dsk/TSM0_dg/TSM0_vol_active_log
/dev/vx/dsk/TSM0_dg/TSM0_vol_instance
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_pool0_06     &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_pool0_05     &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_pool0_04     &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_pool0_03     &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_pool0_02     &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_pool0_01     &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_db_backup_03 &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_db_backup_02 &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_db_backup_01 &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_db_03        &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_db_02        &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_db_01        &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_archive_log  &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_active_log   &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_instance     &
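
All fifteen mkfs jobs above were sent to the background with the trailing &, so the filesystems are created in parallel. To block until every job has finished before continuing, the shell builtin wait can be used (a sketch):

[root@300 ~]# wait && echo 'all mkfs jobs finished'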

[root@300 ~]# haconf -dump -makero

Veritas Cluster Server Group

Now that we have the LUNs initialized into a Disk Group, we may create the cluster service group.

[root@300 ~]# haconf -makerw
[root@300 ~]# hagrp -add TSM0_site
VCS NOTICE V-16-1-10136 Group added; populating SystemList and setting the Parallel attribute recommended before adding resources
[root@300 ~]# hagrp -modify TSM0_site SystemList 300 0 301 1
[root@300 ~]# hagrp -modify TSM0_site AutoStartList 300 301
[root@300 ~]# hagrp -modify TSM0_site Parallel 0
[root@300 ~]# hares -add    TSM0_nic_bond0 NIC TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_nic_bond0 Critical 1
[root@300 ~]# hares -modify TSM0_nic_bond0 PingOptimize 1
[root@300 ~]# hares -modify TSM0_nic_bond0 Device bond0
[root@300 ~]# hares -modify TSM0_nic_bond0 Enabled 1
[root@300 ~]# hares -probe  TSM0_nic_bond0 -sys 301
[root@300 ~]# hares -add    TSM0_ip_bond0 IP TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_ip_bond0 Critical 1
[root@300 ~]# hares -modify TSM0_ip_bond0 Device bond0
[root@300 ~]# hares -modify TSM0_ip_bond0 Address 10.20.30.44
[root@300 ~]# hares -modify TSM0_ip_bond0 NetMask 255.255.255.0
[root@300 ~]# hares -modify TSM0_ip_bond0 Enabled 1
[root@300 ~]# hares -link   TSM0_ip_bond0 TSM0_nic_bond0
[root@300 ~]# hares -add    TSM0_dg DiskGroup TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_dg Critical 1
[root@300 ~]# hares -modify TSM0_dg DiskGroup TSM0_dg
[root@300 ~]# hares -modify TSM0_dg Enabled 1
[root@300 ~]# hares -probe  TSM0_dg -sys 301
[root@300 ~]# mkdir /tsm0
[root@301 ~]# mkdir /tsm0

I did not want to type all of these commands over and over again, so I generated them as shown below.

[LOCAL] % cat > LIST << __EOF
stgFC_020    32  /tsm0                         TSM0_vol_instance      TSM0_mnt_instance
stgFC_012   128  /tsm0/active_log              TSM0_vol_active_log    TSM0_mnt_active_log
stgFC_016   384  /tsm0/archive_log             TSM0_vol_archive_log   TSM0_mnt_archive_log
stgFC_013   300  /tsm0/db/db_01                TSM0_vol_db_01         TSM0_mnt_db_01
stgFC_014   300  /tsm0/db/db_02                TSM0_vol_db_02         TSM0_mnt_db_02
stgFC_015   300  /tsm0/db/db_03                TSM0_vol_db_03         TSM0_mnt_db_03
stgFC_017   900  /tsm0/db_backup/db_backup_01  TSM0_vol_db_backup_01  TSM0_mnt_db_backup_01
stgFC_018   900  /tsm0/db_backup/db_backup_02  TSM0_vol_db_backup_02  TSM0_mnt_db_backup_02
stgFC_019   900  /tsm0/db_backup/db_backup_03  TSM0_vol_db_backup_03  TSM0_mnt_db_backup_03
stgFC_01A  6700  /tsm0/pool0/pool0_01          TSM0_vol_pool0_01      TSM0_mnt_pool0_01
stgFC_01B  6700  /tsm0/pool0/pool0_02          TSM0_vol_pool0_02      TSM0_mnt_pool0_02
stgFC_01C  6700  /tsm0/pool0/pool0_03          TSM0_vol_pool0_03      TSM0_mnt_pool0_03
stgFC_01D  6700  /tsm0/pool0/pool0_04          TSM0_vol_pool0_04      TSM0_mnt_pool0_04
stgFC_01E  6700  /tsm0/pool0/pool0_05          TSM0_vol_pool0_05      TSM0_mnt_pool0_05
stgFC_01F  6700  /tsm0/pool0/pool0_06          TSM0_vol_pool0_06      TSM0_mnt_pool0_06
__EOF
[LOCAL] % cat LIST \
  | while read STG SIZE MNTPOINT VOL MNTNAME
    do
      echo sleep 0.2; echo hares -add    ${MNTNAME} Mount TSM0_site
      echo sleep 0.2; echo hares -modify ${MNTNAME} Critical 1
      echo sleep 0.2; echo hares -modify ${MNTNAME} SnapUmount 0
      echo sleep 0.2; echo hares -modify ${MNTNAME} MountPoint ${MNTPOINT}
      echo sleep 0.2; echo hares -modify ${MNTNAME} BlockDevice /dev/vx/dsk/TSM0_dg/${VOL}
      echo sleep 0.2; echo hares -modify ${MNTNAME} FSType vxfs
      echo sleep 0.2; echo hares -modify ${MNTNAME} MountOpt largefiles
      echo sleep 0.2; echo hares -modify ${MNTNAME} FsckOpt %-y
      echo sleep 0.2; echo hares -modify ${MNTNAME} Enabled 1
      echo sleep 0.2; echo hares -probe  ${MNTNAME} -sys 301
      echo sleep 0.2; echo hares -link   ${MNTNAME} TSM0_dg
      echo
    done
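
The loop only prints the needed hares commands (each preceded by a short sleep); nothing has been executed yet. A sketch of running them, assuming the loop output was redirected into a hypothetical tsm0-mounts.sh file:

[LOCAL] % scp tsm0-mounts.sh root@300:
[root@300 ~]# sh -e tsm0-mounts.sh

The generated commands, as executed, follow.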
[root@300 ~]# hares -add    TSM0_mnt_instance Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_instance Critical 1
[root@300 ~]# hares -modify TSM0_mnt_instance SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_instance MountPoint /tsm0
[root@300 ~]# hares -modify TSM0_mnt_instance BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_instance
[root@300 ~]# hares -modify TSM0_mnt_instance FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_instance MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_instance FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_instance Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_instance -sys 301
[root@300 ~]# hares -link   TSM0_mnt_instance TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_active_log Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_active_log Critical 1
[root@300 ~]# hares -modify TSM0_mnt_active_log SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_active_log MountPoint /tsm0/active_log
[root@300 ~]# hares -modify TSM0_mnt_active_log BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_active_log
[root@300 ~]# hares -modify TSM0_mnt_active_log FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_active_log MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_active_log FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_active_log Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_active_log -sys 301
[root@300 ~]# hares -link   TSM0_mnt_active_log TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_archive_log Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_archive_log Critical 1
[root@300 ~]# hares -modify TSM0_mnt_archive_log SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_archive_log MountPoint /tsm0/archive_log
[root@300 ~]# hares -modify TSM0_mnt_archive_log BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_archive_log
[root@300 ~]# hares -modify TSM0_mnt_archive_log FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_archive_log MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_archive_log FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_archive_log Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_archive_log -sys 301
[root@300 ~]# hares -link   TSM0_mnt_archive_log TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_db_01 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_db_01 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_db_01 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_db_01 MountPoint /tsm0/db/db_01
[root@300 ~]# hares -modify TSM0_mnt_db_01 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_db_01
[root@300 ~]# hares -modify TSM0_mnt_db_01 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_db_01 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_db_01 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_db_01 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_db_01 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_db_01 TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_db_02 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_db_02 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_db_02 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_db_02 MountPoint /tsm0/db/db_02
[root@300 ~]# hares -modify TSM0_mnt_db_02 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_db_02
[root@300 ~]# hares -modify TSM0_mnt_db_02 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_db_02 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_db_02 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_db_02 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_db_02 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_db_02 TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_db_03 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_db_03 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_db_03 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_db_03 MountPoint /tsm0/db/db_03
[root@300 ~]# hares -modify TSM0_mnt_db_03 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_db_03
[root@300 ~]# hares -modify TSM0_mnt_db_03 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_db_03 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_db_03 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_db_03 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_db_03 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_db_03 TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_db_backup_01 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_db_backup_01 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_db_backup_01 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_db_backup_01 MountPoint /tsm0/db_backup/db_backup_01
[root@300 ~]# hares -modify TSM0_mnt_db_backup_01 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_db_backup_01
[root@300 ~]# hares -modify TSM0_mnt_db_backup_01 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_db_backup_01 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_db_backup_01 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_db_backup_01 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_db_backup_01 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_db_backup_01 TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_db_backup_02 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_db_backup_02 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_db_backup_02 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_db_backup_02 MountPoint /tsm0/db_backup/db_backup_02
[root@300 ~]# hares -modify TSM0_mnt_db_backup_02 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_db_backup_02
[root@300 ~]# hares -modify TSM0_mnt_db_backup_02 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_db_backup_02 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_db_backup_02 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_db_backup_02 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_db_backup_02 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_db_backup_02 TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_db_backup_03 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_db_backup_03 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_db_backup_03 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_db_backup_03 MountPoint /tsm0/db_backup/db_backup_03
[root@300 ~]# hares -modify TSM0_mnt_db_backup_03 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_db_backup_03
[root@300 ~]# hares -modify TSM0_mnt_db_backup_03 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_db_backup_03 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_db_backup_03 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_db_backup_03 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_db_backup_03 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_db_backup_03 TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_pool0_01 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_pool0_01 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_pool0_01 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_pool0_01 MountPoint /tsm0/pool0/pool0_01
[root@300 ~]# hares -modify TSM0_mnt_pool0_01 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_01
[root@300 ~]# hares -modify TSM0_mnt_pool0_01 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_pool0_01 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_pool0_01 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_pool0_01 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_pool0_01 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_pool0_01 TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_pool0_02 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_pool0_02 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_pool0_02 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_pool0_02 MountPoint /tsm0/pool0/pool0_02
[root@300 ~]# hares -modify TSM0_mnt_pool0_02 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_02
[root@300 ~]# hares -modify TSM0_mnt_pool0_02 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_pool0_02 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_pool0_02 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_pool0_02 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_pool0_02 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_pool0_02 TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_pool0_03 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_pool0_03 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_pool0_03 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_pool0_03 MountPoint /tsm0/pool0/pool0_03
[root@300 ~]# hares -modify TSM0_mnt_pool0_03 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_03
[root@300 ~]# hares -modify TSM0_mnt_pool0_03 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_pool0_03 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_pool0_03 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_pool0_03 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_pool0_03 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_pool0_03 TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_pool0_04 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_pool0_04 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_pool0_04 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_pool0_04 MountPoint /tsm0/pool0/pool0_04
[root@300 ~]# hares -modify TSM0_mnt_pool0_04 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_04
[root@300 ~]# hares -modify TSM0_mnt_pool0_04 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_pool0_04 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_pool0_04 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_pool0_04 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_pool0_04 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_pool0_04 TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_pool0_05 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_pool0_05 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_pool0_05 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_pool0_05 MountPoint /tsm0/pool0/pool0_05
[root@300 ~]# hares -modify TSM0_mnt_pool0_05 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_05
[root@300 ~]# hares -modify TSM0_mnt_pool0_05 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_pool0_05 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_pool0_05 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_pool0_05 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_pool0_05 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_pool0_05 TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_pool0_06 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_pool0_06 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_pool0_06 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_pool0_06 MountPoint /tsm0/pool0/pool0_06
[root@300 ~]# hares -modify TSM0_mnt_pool0_06 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_06
[root@300 ~]# hares -modify TSM0_mnt_pool0_06 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_pool0_06 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_pool0_06 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_pool0_06 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_pool0_06 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_pool0_06 TSM0_dg
[root@300 ~]# hares -state | grep TSM0 | grep _mnt_ | \
                while read I; do hares -display $I 2>&1 | grep -v ArgListValues | grep 'largefiles'; done | column -t
TSM0_mnt_active_log    MountOpt  localclus  largefiles
TSM0_mnt_active_log    MountOpt  localclus  largefiles
TSM0_mnt_archive_log   MountOpt  localclus  largefiles
TSM0_mnt_archive_log   MountOpt  localclus  largefiles
TSM0_mnt_db_01         MountOpt  localclus  largefiles
TSM0_mnt_db_01         MountOpt  localclus  largefiles
TSM0_mnt_db_02         MountOpt  localclus  largefiles
TSM0_mnt_db_02         MountOpt  localclus  largefiles
TSM0_mnt_db_03         MountOpt  localclus  largefiles
TSM0_mnt_db_03         MountOpt  localclus  largefiles
TSM0_mnt_db_backup_01  MountOpt  localclus  largefiles
TSM0_mnt_db_backup_01  MountOpt  localclus  largefiles
TSM0_mnt_db_backup_02  MountOpt  localclus  largefiles
TSM0_mnt_db_backup_02  MountOpt  localclus  largefiles
TSM0_mnt_db_backup_03  MountOpt  localclus  largefiles
TSM0_mnt_db_backup_03  MountOpt  localclus  largefiles
TSM0_mnt_instance      MountOpt  localclus  largefiles
TSM0_mnt_instance      MountOpt  localclus  largefiles
TSM0_mnt_pool0_01      MountOpt  localclus  largefiles
TSM0_mnt_pool0_01      MountOpt  localclus  largefiles
TSM0_mnt_pool0_02      MountOpt  localclus  largefiles
TSM0_mnt_pool0_02      MountOpt  localclus  largefiles
TSM0_mnt_pool0_03      MountOpt  localclus  largefiles
TSM0_mnt_pool0_03      MountOpt  localclus  largefiles
TSM0_mnt_pool0_04      MountOpt  localclus  largefiles
TSM0_mnt_pool0_04      MountOpt  localclus  largefiles
TSM0_mnt_pool0_05      MountOpt  localclus  largefiles
TSM0_mnt_pool0_05      MountOpt  localclus  largefiles
TSM0_mnt_pool0_06      MountOpt  localclus  largefiles
TSM0_mnt_pool0_06      MountOpt  localclus  largefiles
[root@300 ~]# hares -add    TSM0_server Application TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_server StartProgram   "/etc/init.d/tsm0 start"
[root@300 ~]# hares -modify TSM0_server StopProgram    "/etc/init.d/tsm0 stop"
[root@300 ~]# hares -modify TSM0_server MonitorProgram "/etc/init.d/tsm0 status"
[root@300 ~]# hares -modify TSM0_server Enabled 1
[root@300 ~]# hares -probe  TSM0_server -sys 301
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_active_log
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_archive_log
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_db_01
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_db_02
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_db_03
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_db_backup_01
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_db_backup_02
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_db_backup_03
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_pool0_01
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_pool0_02
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_pool0_03
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_pool0_04
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_pool0_05
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_pool0_06
[root@300 ~]# hares -link   TSM0_server           TSM0_ip_bond0
[root@300 ~]# hares -link   TSM0_mnt_active_log   TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_archive_log  TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_db_01        TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_db_02        TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_db_03        TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_db_backup_01 TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_db_backup_02 TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_db_backup_03 TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_pool0_01     TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_pool0_02     TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_pool0_03     TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_pool0_04     TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_pool0_05     TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_pool0_06     TSM0_mnt_instance
[root@300 ~]# vxdg import TSM0_dg
[root@300 ~]# mount -t vxfs /dev/vx/dsk/TSM0_dg/TSM0_vol_instance /tsm0
[root@300 ~]# mkdir -p /tsm0/active_log
[root@300 ~]# mkdir -p /tsm0/archive_log
[root@300 ~]# mkdir -p /tsm0/db/db_01
[root@300 ~]# mkdir -p /tsm0/db/db_02
[root@300 ~]# mkdir -p /tsm0/db/db_03
[root@300 ~]# mkdir -p /tsm0/db_backup/db_backup_01
[root@300 ~]# mkdir -p /tsm0/db_backup/db_backup_02
[root@300 ~]# mkdir -p /tsm0/db_backup/db_backup_03
[root@300 ~]# mkdir -p /tsm0/pool0/pool0_01
[root@300 ~]# mkdir -p /tsm0/pool0/pool0_02
[root@300 ~]# mkdir -p /tsm0/pool0/pool0_03
[root@300 ~]# mkdir -p /tsm0/pool0/pool0_04
[root@300 ~]# mkdir -p /tsm0/pool0/pool0_05
[root@300 ~]# mkdir -p /tsm0/pool0/pool0_06
[root@300 ~]# find /tsm0
/tsm0
/tsm0/lost+found
/tsm0/active_log
/tsm0/archive_log
/tsm0/db
/tsm0/db/db_01
/tsm0/db/db_02
/tsm0/db/db_03
/tsm0/db_backup
/tsm0/db_backup/db_backup_01
/tsm0/db_backup/db_backup_02
/tsm0/db_backup/db_backup_03
/tsm0/pool0
/tsm0/pool0/pool0_01
/tsm0/pool0/pool0_02
/tsm0/pool0/pool0_03
/tsm0/pool0/pool0_04
/tsm0/pool0/pool0_05
/tsm0/pool0/pool0_06
[root@300 ~]# umount /tsm0
[root@300 ~]# vxdg deport TSM0_dg
[root@300 ~]# haconf -dump -makero
[root@300 ~]# grep TSM0_server /etc/VRTSvcs/conf/config/main.cf
        Application TSM0_server (
        TSM0_server requires TSM0_ip_bond0
        TSM0_server requires TSM0_mnt_active_log
        TSM0_server requires TSM0_mnt_archive_log
        TSM0_server requires TSM0_mnt_db_01
        TSM0_server requires TSM0_mnt_db_02
        TSM0_server requires TSM0_mnt_db_03
        TSM0_server requires TSM0_mnt_db_backup_01
        TSM0_server requires TSM0_mnt_db_backup_02
        TSM0_server requires TSM0_mnt_db_backup_03
        TSM0_server requires TSM0_mnt_instance
        TSM0_server requires TSM0_mnt_pool0_01
        TSM0_server requires TSM0_mnt_pool0_02
        TSM0_server requires TSM0_mnt_pool0_03
        TSM0_server requires TSM0_mnt_pool0_04
        TSM0_server requires TSM0_mnt_pool0_05
        TSM0_server requires TSM0_mnt_pool0_06
        //      Application TSM0_server
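
The same dependency tree can also be queried from the running cluster (a sketch, using the hares dependency listing):

[root@300 ~]# hares -dep | grep TSM0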

Local Per Node Resources

[root@300 ~]# lvcreate -n lv_tmp        -L  4G vg_local
[root@300 ~]# lvcreate -n lv_opt_tivoli -L 16G vg_local
[root@300 ~]# lvcreate -n lv_home       -L  4G vg_local
[root@300 ~]# mkfs.ext3 /dev/vg_local/lv_tmp
[root@300 ~]# mkfs.ext3 /dev/vg_local/lv_opt_tivoli
[root@300 ~]# mkfs.ext3 /dev/vg_local/lv_home
[root@301 ~]# lvcreate -n lv_tmp        -L  4G vg_local
[root@301 ~]# lvcreate -n lv_opt_tivoli -L 16G vg_local
[root@301 ~]# lvcreate -n lv_home       -L  4G vg_local
[root@301 ~]# mkfs.ext3 /dev/vg_local/lv_tmp
[root@301 ~]# mkfs.ext3 /dev/vg_local/lv_opt_tivoli
[root@301 ~]# mkfs.ext3 /dev/vg_local/lv_home
[root@300 ~]# cat /etc/fstab
/dev/mapper/vg_local-lv_root              /           ext3 rw,noatime,nodiratime      1 1
UUID=28d0988a-e6d7-48d8-b0e5-0f70f8eb681e /boot       ext3 defaults                   1 2
UUID=D401-661A                            /boot/efi   vfat umask=0077,shortname=winnt 0 0
/dev/vg_local/lv_swap                     swap        swap defaults                   0 0
/dev/vg_local/lv_tmp                      /tmp        ext3 rw,noatime,nodiratime      2 2
/dev/vg_local/lv_opt_tivoli               /opt/tivoli ext3 rw,noatime,nodiratime      2 2
/dev/vg_local/lv_home                     /home       ext3 rw,noatime,nodiratime      2 2

# VIRT
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
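
With these /etc/fstab entries in place the new filesystems can be mounted on both nodes without a reboot (a sketch; the /opt/tivoli mount point may need to be created first):

[root@300 ~]# mkdir -p /opt/tivoli
[root@300 ~]# mount -a
[root@301 ~]# mkdir -p /opt/tivoli
[root@301 ~]# mount -a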

Install IBM TSM Server Dependencies.

[root@ANY ~]# yum install numactl
[root@ANY ~]# yum install /usr/lib/libgtk-x11-2.0.so.0
[root@ANY ~]# yum install /usr/lib64/libgtk-x11-2.0.so.0
[root@ANY ~]# yum install xorg-x11-xauth xterm fontconfig libICE \
                          libX11-common libXau libXmu libSM libX11 libXt

System /etc/sysctl.conf parameters for both nodes.

[root@300 ~]# cat /etc/sysctl.conf
# Controls IP packet forwarding
net.ipv4.ip_forward = 0

# Controls source route verification
net.ipv4.conf.default.rp_filter = 1

# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0

# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0

# Controls whether core dumps will append the PID to the core filename.
# Useful for debugging multi-threaded applications.
kernel.core_uses_pid = 1

# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1

# Disable netfilter on bridges.
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0

# Controls the default maximum size of a message queue
kernel.msgmnb = 65536

# Controls the maximum size of a message, in bytes
kernel.msgmax = 65536

# Controls the maximum shared segment size, in bytes
kernel.shmmax = 206158430208

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296

# For SF HA
kernel.hung_task_panic=0

# NetWorker
# connection backlog (hash tables) to the maximum value allowed
net.ipv4.tcp_max_syn_backlog = 8192
net.core.netdev_max_backlog = 8192

# increase the memory size available for TCP buffers
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 8192 524288 16777216
net.ipv4.tcp_wmem = 8192 524288 16777216

# recommended keepalive values
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 20
net.ipv4.tcp_keepalive_time = 600

# recommended timeout after improper close
net.ipv4.tcp_fin_timeout = 60
sunrpc.tcp_slot_table_entries = 64

# for RDBMS 11.2.0.4 rman cat
fs.suid_dumpable = 1
fs.aio-max-nr = 1048576
fs.file-max = 6815744

# support EMC 2016.04.20
net.core.somaxconn = 1024

# 256 * RAM in GB
kernel.shmmni = 65536

# TSM/NSR
kernel.sem = 250 256000 32 65536

# RAM in GB * 1024
kernel.msgmni = 262144

# TSM
kernel.randomize_va_space = 0
vm.swappiness = 0
vm.overcommit_memory = 0
The /etc/sysctl.conf file on the second node (301) is identical.
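
To load these parameters without a reboot, reload the file with sysctl(8) on both nodes; note that the net.bridge.* keys will complain if the bridge module is not loaded (a sketch):

[root@300 ~]# sysctl -p
[root@301 ~]# sysctl -p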

Install IBM TSM Server

Connect to each node with SSH Forwarding enabled and install IBM TSM server.
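
For example (a sketch; the installer is graphical, so X11 forwarding is required):

[LOCAL] % ssh -X root@300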

[root@300 ~]# chmod +x 7.1.6.000-TIV-TSMSRV-Linuxx86_64.bin
[root@300 ~]# ./7.1.6.000-TIV-TSMSRV-Linuxx86_64.bin
[root@300 ~]# ./install.sh

… and the second node.

[root@301 ~]# chmod +x 7.1.6.000-TIV-TSMSRV-Linuxx86_64.bin
[root@301 ~]# ./7.1.6.000-TIV-TSMSRV-Linuxx86_64.bin
[root@301 ~]# ./install.sh

Options chosen during the installation.

INSTALL | DESELECT 'Languages' and DESELECT 'Operations Center'
INSTALL | /opt/tivoli/IBM/IBMIMShared
INSTALL | /opt/tivoli/IBM/InstallationManager/eclipse
INSTALL | /opt/tivoli/tsm

Screenshots from the installation process.

ibm-tsm-install-01

ibm-tsm-install-02

ibm-tsm-install-03

ibm-tsm-install-04

ibm-tsm-install-05

ibm-tsm-install-06

Install IBM TSM Client

[root@300 ~]# yum localinstall gskcrypt64-8.0.50.66.linux.x86_64.rpm \
                               gskssl64-8.0.50.66.linux.x86_64.rpm \
                               TIVsm-API64.x86_64.rpm \
                               TIVsm-BA.x86_64.rpm
[root@301 ~]# yum localinstall gskcrypt64-8.0.50.66.linux.x86_64.rpm \
                               gskssl64-8.0.50.66.linux.x86_64.rpm \
                               TIVsm-API64.x86_64.rpm \
                               TIVsm-BA.x86_64.rpm
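
A quick way to verify that the client packages landed on both nodes (a sketch):

[root@300 ~]# rpm -qa | grep -E 'TIVsm|gsk'
[root@301 ~]# rpm -qa | grep -E 'TIVsm|gsk'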

Nodes Configuration for IBM TSM Server

[root@300 ~]# useradd -u 1500 -m tsm0
[root@301 ~]# useradd -u 1500 -m tsm0
[root@300 ~]# passwd tsm0
Changing password for user tsm0.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

[root@301 ~]# passwd tsm0
Changing password for user tsm0.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
[root@300 ~]# tail -1 /etc/passwd
tsm0:x:1500:1500::/home/tsm0:/bin/bash

[root@301 ~]# tail -1 /etc/passwd
tsm0:x:1500:1500::/home/tsm0:/bin/bash
[root@300 ~]# tail -1 /etc/group
tsm0:x:1500:

[root@301 ~]# tail -1 /etc/group
tsm0:x:1500:
[root@300 ~]# cat /etc/security/limits.conf
# ORACLE
oracle              soft    nproc   16384
oracle              hard    nproc   16384
oracle              soft    nofile  4096
oracle              hard    nofile  65536
oracle              soft    stack   10240

# TSM
tsm0                soft    nofile  32768
tsm0                hard    nofile  32768

The /etc/security/limits.conf file on node 301 is identical.
[root@300 ~]# :> /var/run/dsmserv_tsm0.pid
[root@301 ~]# :> /var/run/dsmserv_tsm0.pid
[root@300 ~]# chown tsm0:tsm0 /var/run/dsmserv_tsm0.pid
[root@301 ~]# chown tsm0:tsm0 /var/run/dsmserv_tsm0.pid
[root@300 ~]# hares -state | grep TSM
TSM0_dg               State                 300  OFFLINE
TSM0_dg               State                 301  OFFLINE
TSM0_ip_bond0         State                 300  OFFLINE
TSM0_ip_bond0         State                 301  OFFLINE
TSM0_mnt_active_log   State                 300  OFFLINE
TSM0_mnt_active_log   State                 301  OFFLINE
TSM0_mnt_archive_log  State                 300  OFFLINE
TSM0_mnt_archive_log  State                 301  OFFLINE
TSM0_mnt_db_01        State                 300  OFFLINE
TSM0_mnt_db_01        State                 301  OFFLINE
TSM0_mnt_db_02        State                 300  OFFLINE
TSM0_mnt_db_02        State                 301  OFFLINE
TSM0_mnt_db_03        State                 300  OFFLINE
TSM0_mnt_db_03        State                 301  OFFLINE
TSM0_mnt_db_backup_01 State                 300  OFFLINE
TSM0_mnt_db_backup_01 State                 301  OFFLINE
TSM0_mnt_db_backup_02 State                 300  OFFLINE
TSM0_mnt_db_backup_02 State                 301  OFFLINE
TSM0_mnt_db_backup_03 State                 300  OFFLINE
TSM0_mnt_db_backup_03 State                 301  OFFLINE
TSM0_mnt_instance     State                 300  OFFLINE
TSM0_mnt_instance     State                 301  OFFLINE
TSM0_mnt_pool0_01     State                 300  OFFLINE
TSM0_mnt_pool0_01     State                 301  OFFLINE
TSM0_mnt_pool0_02     State                 300  OFFLINE
TSM0_mnt_pool0_02     State                 301  OFFLINE
TSM0_mnt_pool0_03     State                 300  OFFLINE
TSM0_mnt_pool0_03     State                 301  OFFLINE
TSM0_mnt_pool0_04     State                 300  OFFLINE
TSM0_mnt_pool0_04     State                 301  OFFLINE
TSM0_mnt_pool0_05     State                 300  OFFLINE
TSM0_mnt_pool0_05     State                 301  OFFLINE
TSM0_mnt_pool0_06     State                 300  OFFLINE
TSM0_mnt_pool0_06     State                 301  OFFLINE
TSM0_nic_bond0        State                 300  ONLINE
TSM0_nic_bond0        State                 301  ONLINE
TSM0_server           State                 300  OFFLINE
TSM0_server           State                 301  OFFLINE
[root@300 ~]# hares -online TSM0_mnt_instance -sys $( hostname -s )
[root@300 ~]# hares -online TSM0_ip_bond0     -sys $( hostname -s )
[root@300 ~]# hares -state | grep TSM0 | grep 301 | grep mnt | grep -v instance | awk '{print $1}' \
                | while read I; do hares -online ${I} -sys $( hostname -s ); done
[root@300 ~]# hares -state | grep 301 | grep TSM0
TSM0_dg               State                 301  ONLINE
TSM0_ip_bond0         State                 301  ONLINE
TSM0_mnt_active_log   State                 301  ONLINE
TSM0_mnt_archive_log  State                 301  ONLINE
TSM0_mnt_db_01        State                 301  ONLINE
TSM0_mnt_db_02        State                 301  ONLINE
TSM0_mnt_db_03        State                 301  ONLINE
TSM0_mnt_db_backup_01 State                 301  ONLINE
TSM0_mnt_db_backup_02 State                 301  ONLINE
TSM0_mnt_db_backup_03 State                 301  ONLINE
TSM0_mnt_instance     State                 301  ONLINE
TSM0_mnt_pool0_01     State                 301  ONLINE
TSM0_mnt_pool0_02     State                 301  ONLINE
TSM0_mnt_pool0_03     State                 301  ONLINE
TSM0_mnt_pool0_04     State                 301  ONLINE
TSM0_mnt_pool0_05     State                 301  ONLINE
TSM0_mnt_pool0_06     State                 301  ONLINE
TSM0_nic_bond0        State                 301  ONLINE
TSM0_server           State                 301  OFFLINE
[root@300 ~]# find /tsm0 | grep -v 'lost+found'
/tsm0
/tsm0/active_log
/tsm0/archive_log
/tsm0/db
/tsm0/db/db_01
/tsm0/db/db_02
/tsm0/db/db_03
/tsm0/db_backup
/tsm0/db_backup/db_backup_01
/tsm0/db_backup/db_backup_02
/tsm0/db_backup/db_backup_03
/tsm0/pool0
/tsm0/pool0/pool0_01
/tsm0/pool0/pool0_02
/tsm0/pool0/pool0_03
/tsm0/pool0/pool0_04
/tsm0/pool0/pool0_05
/tsm0/pool0/pool0_06
[root@300 ~]# chown -R tsm0:tsm0 /tsm0

IBM TSM Server Configuration

Connect to one of the nodes with SSH Forwarding enabled.

[root@300 ~]# cd /opt/tivoli/tsm/server/bin
[root@300 /opt/tivoli/tsm/server/bin]# ./dsmicfgx
Preparing to install...
Extracting the JRE from the installer archive...
Unpacking the JRE...
Extracting the installation resources from the installer archive...
Configuring the installer for this system's environment...

Launching installer...

Options chosen during the configuration.

INSTALL | Instance user ID:
INSTALL |     tsm0
INSTALL |
INSTALL | Instance directory:
INSTALL |     /tsm0
INSTALL |
INSTALL | Database directories:
INSTALL |     /tsm0/db/db_01
INSTALL |     /tsm0/db/db_02
INSTALL |     /tsm0/db/db_03
INSTALL |
INSTALL | Active log directory:
INSTALL |     /tsm0/active_log
INSTALL |
INSTALL | Primary archive log directory:
INSTALL |     /tsm0/archive_log
INSTALL |
INSTALL | Instance autostart setting:
INSTALL |     Start automatically using the instance user ID

Screenshots from the configuration process.

ibm-tsm-configure-01

ibm-tsm-configure-02

ibm-tsm-configure-03

ibm-tsm-configure-04

ibm-tsm-configure-05

ibm-tsm-configure-06

ibm-tsm-configure-07

ibm-tsm-configure-08

ibm-tsm-configure-09

Log from the IBM TSM DB2 instance creation.

Creating the database manager instance...
The database manager instance was created successfully.

Formatting the server database...

ANR7800I DSMSERV generated at 16:39:04 on Jun  8 2016.

IBM Tivoli Storage Manager for Linux/x86_64
Version 7, Release 1, Level 6.000

Licensed Materials - Property of IBM

(C) Copyright IBM Corporation 1990, 2016.
All rights reserved.
U.S. Government Users Restricted Rights - Use, duplication or disclosure
restricted by GSA ADP Schedule Contract with IBM Corporation.

ANR7801I Subsystem process ID is 5208.
ANR0900I Processing options file /tsm0/dsmserv.opt.
ANR0010W Unable to open message catalog for language en_US.UTF-8. The default
language message catalog will be used.
ANR7814I Using instance directory /tsm0.
ANR4726I The ICC support module has been loaded.
ANR0152I Database manager successfully started.
ANR2976I Offline DB backup for database TSMDB1 started.
ANR2974I Offline DB backup for database TSMDB1 completed successfully.
ANR0992I Server's database formatting complete.
ANR0369I Stopping the database manager because of a server shutdown.

Format completed with return code 0
Beginning initial configuration...

ANR7800I DSMSERV generated at 16:39:04 on Jun  8 2016.

IBM Tivoli Storage Manager for Linux/x86_64
Version 7, Release 1, Level 6.000

Licensed Materials - Property of IBM

(C) Copyright IBM Corporation 1990, 2016.
All rights reserved.
U.S. Government Users Restricted Rights - Use, duplication or disclosure
restricted by GSA ADP Schedule Contract with IBM Corporation.

ANR7801I Subsystem process ID is 8741.
ANR0900I Processing options file /tsm0/dsmserv.opt.
ANR0010W Unable to open message catalog for language en_US.UTF-8. The default
language message catalog will be used.
ANR7814I Using instance directory /tsm0.
ANR4726I The ICC support module has been loaded.
ANR0990I Server restart-recovery in progress.
ANR0152I Database manager successfully started.
ANR1628I The database manager is using port 51500 for server connections.
ANR1636W The server machine GUID changed: old value (), new value (f0.8a.27.61-
.e5.43.b6.11.92.b5.00.0a.f7.49.31.18).
ANR2100I Activity log process has started.
ANR3733W The master encryption key cannot be generated because the server
password is not set.
ANR3339I Default Label in key data base is TSM Server SelfSigned Key.
ANR4726I The NAS-NDMP support module has been loaded.
ANR1794W TSM SAN discovery is disabled by options.
ANR2200I Storage pool BACKUPPOOL defined (device class DISK).
ANR2200I Storage pool ARCHIVEPOOL defined (device class DISK).
ANR2200I Storage pool SPACEMGPOOL defined (device class DISK).
ANR2560I Schedule manager started.
ANR0993I Server initialization complete.
ANR0916I TIVOLI STORAGE MANAGER distributed by Tivoli is now ready for use.
ANR2094I Server name set to TSM0.
ANR4865W The server name has been changed. Windows clients that use "passworda-
ccess generate" may be unable to authenticate with the server.
ANR2068I Administrator ADMIN registered.
ANR2076I System privilege granted to administrator ADMIN.
ANR1912I Stopping the activity log because of a server shutdown.
ANR0369I Stopping the database manager because of a server shutdown.

Configuration is complete.
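
A quick sanity check that the instance configuration landed in the shared /tsm0 directory (a sketch; dsmserv.opt is the options file referenced in the log above):

[root@300 ~]# ls -l /tsm0/dsmserv.opt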

Modify IBM TSM Server Startup Script

The startup script below was modified to work properly with Veritas Cluster Server. The key change is in the status handling at the end of the script, which reports state with the exit codes the VCS Application agent expects (for example 100 when the instance is stopped).

[root@300 ~]# cat /etc/init.d/tsm0
#!/bin/bash
#
# dsmserv       Start/Stop IBM Tivoli Storage Manager
#
# chkconfig: - 90 10
# description: Starts/Stops an IBM Tivoli Storage Manager Server instance
# processname: dsmserv
# pidfile: /var/run/dsmserv_instancename.pid

#***********************************************************************
# Distributed Storage Manager (ADSM)                                   *
# Server Component                                                     *
#                                                                      *
# IBM Confidential                                                     *
# (IBM Confidential-Restricted when combined with the Aggregated OCO   *
# Source Modules for this Program)                                     *
#                                                                      *
# OCO Source Materials                                                 *
#                                                                      *
# 5765-303 (C) Copyright IBM Corporation 1990, 2009                    *
#***********************************************************************

#
# This init script is designed to start a single Tivoli Storage Manager
# server instance on a system where multiple instances might be running.
# It assumes that the name of the script is also the name of the instance
# to be started (or, if the script name starts with Snn or Knn, where 'n'
# is a digit, that the name of the instance is the script name with the
# three letter prefix removed).
#
# To use the script to start multiple instances, install multiple copies
# of the script in /etc/rc.d/init.d, naming each copy after the instance
# it will start.
#
# The script makes a number of simplifying assumptions about the way
# the instance is set up.
# - The Tivoli Storage Manager Server instance runs as a non-root user whose
#   name is the instance name
# - The server's instance directory (the directory in which it keeps all of
#   its important state information) is in a subdirectory of the home
#   directory called tsminst1.
# If any of these assumptions are not valid, then the script will require
# some modifications to work.  To start with, look at the
# instance, instance_user, and instance_dir variables set below...

# First of all, check for syntax
if [[ $# != 1 ]]
then
  echo $"Usage: $0 {start|stop|status|restart}"
  exit 1
fi

prog="dsmserv"
instance=tsm0
serverBinDir="/opt/tivoli/tsm/server/bin"

if [[ ! -e $serverBinDir/$prog ]]
then
   echo "IBM Tivoli Storage Manager Server not found on this system ($serverBinDir/$prog)"
   exit -1
fi

# see if $0 starts with Snn or Knn, where 'n' is a digit.  If it does, then
# strip off the prefix and use the remainder as the instance name.
if [[ ${instance:0:1} == S ]]
then
  instance=${instance#S[0123456789][0123456789]}
elif [[ ${instance:0:1} == K ]]
then
  instance=${instance#K[0123456789][0123456789]}
fi

instance_home=`${serverBinDir}/dsmfngr $instance 2>/dev/null`
if [[ -z "$instance_home" ]]
then
  instance_home="/home/${instance}"
fi
instance_user=tsm0
instance_dir=/tsm0
pidfile="/var/run/${prog}_${instance}.pid"

PATH=/sbin:/bin:/usr/bin:/usr/sbin:$serverBinDir

#
# Do some basic error checking before starting the server
#
# Is the server installed?
if [[ ! -e $serverBinDir/$prog ]]
then
   echo "IBM Tivoli Storage Manager Server not found on this system"
   exit 0
fi

# Does the instance directory exist?
if [[ ! -d $instance_dir ]]
then
 echo "Instance directory ${instance_dir} does not exist"
 exit -1
fi
rc=0

SLEEP_INTERVAL=5
MAX_SLEEP_TIME=10

function check_pid_file()
{
    test -f $pidfile
}

function check_process()
{
    ps -p `cat $pidfile` > /dev/null
}

function check_running()
{
    check_pid_file && check_process
}

start() {
        # set the standard value for the user limits
        ulimit -c unlimited
        ulimit -d unlimited
        ulimit -f unlimited
        ulimit -n 65536
        ulimit -t unlimited
        ulimit -u 16384

        echo -n "Starting $prog instance $instance ... "
        #if we're already running, say so
        status 0
        if [[ $g_status == "running" ]]
        then
           echo "$prog instance $instance already running..."
           exit 0
        else
           $serverBinDir/rc.dsmserv -u $instance_user -i $instance_dir -q >/dev/null 2>&1 &
           # give enough time to server to start
           sleep 5
           # if the lock file got created, we did ok
           if [[ -f $instance_dir/dsmserv.v6lock ]]
           then
              gawk --source '{print $4}' $instance_dir/dsmserv.v6lock>$pidfile
              [ $? = 0 ] && echo "Succeeded" || echo "Failed"
              rc=$?
              echo
              [ $rc -eq 0 ] && touch /var/lock/subsys/${instance}
              return $rc
           else
              echo "Failed"
              return 1
           fi
       fi
}

stop() {
        echo  "Stopping $prog instance $instance ..."
        if [[ -e $pidfile ]]
        then
           # make sure someone else didn't kill us already
           progpid=`cat $pidfile`
           running=`ps -ef | grep $prog | grep -w $progpid | grep -v grep`
           if [[ -n $running ]]
           then
              #echo "executing cmd kill `cat $pidfile`"
              kill `cat $pidfile`

              total_slept=0
              while check_running; do \
                  echo  "$prog instance $instance still running, will check after $SLEEP_INTERVAL seconds"
                  sleep $SLEEP_INTERVAL
                  total_slept=`expr $total_slept + 1`

                  if [ "$total_slept" -gt "$MAX_SLEEP_TIME" ]; then \
                      break
                  fi
              done

              if  check_running
              then
                echo "Unable to stop $prog instance $instance"
                exit 1
              else
                echo "$prog instance $instance stopped Successfully"
              fi
           fi
           # remove the pid file so that we don't try to kill same pid again
           rm $pidfile
           if [[ $? != 0 ]]
           then
              echo "Process $prog instance $instance stopped, but unable to remove $pidfile"
              echo "Be sure to remove $pidfile."
              exit 1
           fi
        else
           echo "$prog instance $instance is not running."
        fi
        rc=$?
        echo
        [ $rc -eq 0 ] && rm -f /var/lock/subsys/${instance}
        return $rc
}

status() {
      # check usage
      if [[ $# != 1 ]]
      then
         echo "$0: Invalid call to status routine. Expected argument: "
         echo "where display_to_screen is 0 or 1 and indicates whether output will be sent to screen."
         exit 100
         # exit 1
      fi
      #see if file $pidfile exists
      # if it does, see if process is running
      # if it doesn't, it's not running - or at least was not started by dsmserv.rc
      if [[ -e $pidfile ]]
      then
         progpid=`cat $pidfile`
         running=`ps -ef | grep $prog | grep -w $progpid | grep -v grep`
         if [[ -n $running ]]
         then
            g_status="running"
         else
            g_status="stopped"
            # remove the pidfile if stopped.
            if [[ -e $pidfile ]]
            then
                rm $pidfile
                if [[ $? != 0 ]]
                then
                    echo "$prog instance $instance stopped, but unable to remove $pidfile"
                    echo "Be sure to remove $pidfile."
                fi
            fi
         fi
      else
        g_status="stopped"
      fi
      if [[ $1 == 1 ]]
      then
            echo "Status of $prog instance $instance: $g_status"
      fi

      if [ "${1}" = "1" ]
      then
        case ${g_status} in
          (stopped) EXIT=100 ;;
          (running) EXIT=110 ;;
        esac
        exit ${EXIT}
      fi
}

restart() {
        stop
        start
}

case "$1" in
  start)
        start
        ;;
  stop)
        stop
        ;;
  status)
        status 1
        ;;
  restart|reload)
        restart
        ;;
  *)
        echo $"Usage: $0 {start|stop|status|restart}"
        exit 1
esac

exit $?

… and the diff(1) between the original and the modified one.

[root@300 ~]# diff -u /etc/init.d/tsm0 /root/tsm0
--- /etc/init.d/tsm0    2016-07-13 13:20:43.000000000 +0200
+++ /root/tsm0          2016-07-13 13:27:41.000000000 +0200
@@ -207,7 +207,8 @@
       then
          echo "$0: Invalid call to status routine. Expected argument: "
          echo "where display_to_screen is 0 or 1 and indicates whether output will be sent to screen."
-         exit 1
+         exit 100
+         # exit 1
       fi
       #see if file $pidfile exists
       # if it does, see if process is running
@@ -239,6 +240,15 @@
       then
             echo "Status of $prog instance $instance: $g_status"
       fi
+
+      if [ "${1}" = "1" ]
+      then
+        case ${g_status} in
+          (stopped) EXIT=100 ;;
+          (running) EXIT=110 ;;
+        esac
+        exit ${EXIT}
+      fi
 }

 restart() {
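
The reason for these modifications is the way the Veritas Cluster Server Application agent interprets MonitorProgram exit codes – 100 means the resource is offline and 110 means it is online – so the status routine was adjusted to speak that protocol. A minimal sketch of such a monitor call – assuming the script is installed as /etc/init.d/tsm0 – could look like this.

# VCS Application agent MonitorProgram convention (hedged):
#   exit 100 - resource offline
#   exit 110 - resource online
/etc/init.d/tsm0 status 1 > /dev/null 2>&1
case $? in
  100) echo "TSM0 instance offline" ;;
  110) echo "TSM0 instance online"  ;;
esac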

Copy tsm0 Profile to the Other Node

[root@300 ~]# pwd
/home
[root@300 /home]# tar -czf - tsm0 | ssh 301 'tar -C /home -xzf -'
[root@300 ~]# cat /home/tsm0/sqllib/db2nodes.cfg
0 TSM0.domain.com 0
[root@301 ~]# cat /home/tsm0/sqllib/db2nodes.cfg
0 TSM0.domain.com 0

IBM TSM Server Start

[root@300 ~]# hares -online TSM0_ip_bond0         -sys 300
[root@300 ~]# hares -online TSM0_mnt_active_log   -sys 300
[root@300 ~]# hares -online TSM0_mnt_archive_log  -sys 300
[root@300 ~]# hares -online TSM0_mnt_db_01        -sys 300
[root@300 ~]# hares -online TSM0_mnt_db_02        -sys 300
[root@300 ~]# hares -online TSM0_mnt_db_03        -sys 300
[root@300 ~]# hares -online TSM0_mnt_db_backup_01 -sys 300
[root@300 ~]# hares -online TSM0_mnt_db_backup_02 -sys 300
[root@300 ~]# hares -online TSM0_mnt_db_backup_03 -sys 300
[root@300 ~]# hares -online TSM0_mnt_instance     -sys 300
[root@300 ~]# hares -online TSM0_mnt_pool0_01     -sys 300
[root@300 ~]# hares -online TSM0_mnt_pool0_02     -sys 300
[root@300 ~]# hares -online TSM0_mnt_pool0_03     -sys 300
[root@300 ~]# hares -online TSM0_mnt_pool0_04     -sys 300
[root@300 ~]# hares -online TSM0_mnt_pool0_05     -sys 300
[root@300 ~]# hares -online TSM0_mnt_pool0_06     -sys 300
[root@300 ~]# hares -state | grep TSM0 | grep 300
TSM0_dg               State                 300  ONLINE
TSM0_ip_bond0         State                 300  ONLINE
TSM0_mnt_active_log   State                 300  ONLINE
TSM0_mnt_archive_log  State                 300  ONLINE
TSM0_mnt_db_01        State                 300  ONLINE
TSM0_mnt_db_02        State                 300  ONLINE
TSM0_mnt_db_03        State                 300  ONLINE
TSM0_mnt_db_backup_01 State                 300  ONLINE
TSM0_mnt_db_backup_02 State                 300  ONLINE
TSM0_mnt_db_backup_03 State                 300  ONLINE
TSM0_mnt_instance     State                 300  ONLINE
TSM0_mnt_pool0_01     State                 300  ONLINE
TSM0_mnt_pool0_02     State                 300  ONLINE
TSM0_mnt_pool0_03     State                 300  ONLINE
TSM0_mnt_pool0_04     State                 300  ONLINE
TSM0_mnt_pool0_05     State                 300  ONLINE
TSM0_mnt_pool0_06     State                 300  ONLINE
TSM0_nic_bond0        State                 300  ONLINE
TSM0_server           State                 300  OFFLINE

[root@300 ~]# cat >> /etc/services << __EOF
DB2_tsm0        60000/tcp
DB2_tsm0_1      60001/tcp
DB2_tsm0_2      60002/tcp
DB2_tsm0_3      60003/tcp
DB2_tsm0_4      60004/tcp
DB2_tsm0_END    60005/tcp
__EOF
[root@300 ~]# hagrp -freeze TSM0_site
[root@300 ~]# hastatus -sum

-- SYSTEM STATE
-- System               State                Frozen

A  300            RUNNING              0
A  301            RUNNING              0

-- GROUP STATE
-- Group           System               Probed     AutoDisabled    State

B  NSR_site        300            Y          N               OFFLINE
B  NSR_site        301            Y          N               ONLINE
B  RMAN_site       300            Y          N               OFFLINE
B  RMAN_site       301            Y          N               ONLINE
B  TSM0_site       300            Y          N               PARTIAL
B  TSM0_site       301            Y          N               OFFLINE
B  VCS_site        300            Y          N               OFFLINE
B  VCS_site        301            Y          N               ONLINE

-- GROUPS FROZEN
-- Group

C  TSM0_site

-- RESOURCES DISABLED
-- Group           Type            Resource

H  TSM0_site      Application     TSM0_server
H  TSM0_site      DiskGroup       TSM0_dg
H  TSM0_site      IP              TSM0_ip_bond0
H  TSM0_site      Mount           TSM0_mnt_active_log
H  TSM0_site      Mount           TSM0_mnt_archive_log
H  TSM0_site      Mount           TSM0_mnt_db_01
H  TSM0_site      Mount           TSM0_mnt_db_02
H  TSM0_site      Mount           TSM0_mnt_db_03
H  TSM0_site      Mount           TSM0_mnt_db_backup_01
H  TSM0_site      Mount           TSM0_mnt_db_backup_02
H  TSM0_site      Mount           TSM0_mnt_db_backup_03
H  TSM0_site      Mount           TSM0_mnt_instance
H  TSM0_site      Mount           TSM0_mnt_pool0_01
H  TSM0_site      Mount           TSM0_mnt_pool0_02
H  TSM0_site      Mount           TSM0_mnt_pool0_03
H  TSM0_site      Mount           TSM0_mnt_pool0_04
H  TSM0_site      Mount           TSM0_mnt_pool0_05
H  TSM0_site      Mount           TSM0_mnt_pool0_06
H  TSM0_site      NIC             TSM0_nic_bond0

[root@300 ~]# su - tsm0 -c '/opt/tivoli/tsm/server/bin/dsmserv -i /tsm0'
ANR7800I DSMSERV generated at 16:39:04 on Jun  8 2016.

IBM Tivoli Storage Manager for Linux/x86_64
Version 7, Release 1, Level 6.000

Licensed Materials - Property of IBM

(C) Copyright IBM Corporation 1990, 2016.
All rights reserved.
U.S. Government Users Restricted Rights - Use, duplication or disclosure
restricted by GSA ADP Schedule Contract with IBM Corporation.

ANR7801I Subsystem process ID is 9834.
ANR0900I Processing options file /tsm0/dsmserv.opt.
ANR0010W Unable to open message catalog for language en_US.UTF-8. The default language message
catalog will be used.
ANR7814I Using instance directory /tsm0.
ANR4726I The ICC support module has been loaded.
ANR0990I Server restart-recovery in progress.
ANR0152I Database manager successfully started.
ANR1628I The database manager is using port 51500 for server connections.
ANR1635I The server machine GUID, 54.80.e8.50.e4.48.e6.11.8e.6d.00.0a.f7.49.2b.08, has
initialized.
ANR2100I Activity log process has started.
ANR3733W The master encryption key cannot be generated because the server password is not set.
ANR3339I Default Label in key data base is TSM Server SelfSigned Key.
ANR4726I The NAS-NDMP support module has been loaded.
ANR1794W TSM SAN discovery is disabled by options.
ANR2803I License manager started.
ANR8200I TCP/IP Version 4 driver ready for connection with clients on port 1500.
ANR9639W Unable to load Shared License File dsmreg.sl.
ANR9652I An EVALUATION LICENSE for IBM System Storage Archive Manager will expire on
08/13/2016.
ANR9652I An EVALUATION LICENSE for Tivoli Storage Manager Basic Edition will expire on
08/13/2016.
ANR9652I An EVALUATION LICENSE for Tivoli Storage Manager Extended Edition will expire on
08/13/2016.
ANR2828I Server is licensed to support IBM System Storage Archive Manager.
ANR2828I Server is licensed to support Tivoli Storage Manager Basic Edition.
ANR2828I Server is licensed to support Tivoli Storage Manager Extended Edition.
ANR2560I Schedule manager started.
ANR0984I Process 1 for EXPIRE INVENTORY (Automatic) started in the BACKGROUND at 01:58:03 PM.
ANR0811I Inventory client file expiration started as process 1.
ANR0167I Inventory file expiration process 1 processed for 0 minutes.
ANR0812I Inventory file expiration process 1 completed: processed 0 nodes, examined 0 objects,
deleting 0 backup objects, 0 archive objects, 0 DB backup volumes, and 0 recovery plan files. 0
objects were retried and 0 errors were encountered.
ANR0985I Process 1 for EXPIRE INVENTORY (Automatic) running in the BACKGROUND completed with
completion state SUCCESS at 01:58:03 PM.
ANR0993I Server initialization complete.
ANR0916I TIVOLI STORAGE MANAGER distributed by Tivoli is now ready for use.
TSM:TSM0>q admin
ANR2017I Administrator SERVER_CONSOLE issued command: QUERY ADMIN

Administrator        Days Since       Days Since      Locked?       Privilege Classes
Name                Last Access     Password Set
--------------     ------------     ------------     ----------     -----------------------
ADMIN                        <1               <1         No         System
ADMIN_CENTER                 halt
ANR2017I Administrator SERVER_CONSOLE issued command: HALT
ANR1912I Stopping the activity log because of a server shutdown.
ANR0369I Stopping the database manager because of a server shutdown.
ANR0991I Server shutdown complete.


[root@300 ~]# hagrp -unfreeze TSM0_site

[root@300 ~]# hares -state | grep TSM0 | grep 300
TSM0_dg               State                 300  ONLINE
TSM0_ip_bond0         State                 300  ONLINE
TSM0_mnt_active_log   State                 300  ONLINE
TSM0_mnt_archive_log  State                 300  ONLINE
TSM0_mnt_db_01        State                 300  ONLINE
TSM0_mnt_db_02        State                 300  ONLINE
TSM0_mnt_db_03        State                 300  ONLINE
TSM0_mnt_db_backup_01 State                 300  ONLINE
TSM0_mnt_db_backup_02 State                 300  ONLINE
TSM0_mnt_db_backup_03 State                 300  ONLINE
TSM0_mnt_instance     State                 300  ONLINE
TSM0_mnt_pool0_01     State                 300  ONLINE
TSM0_mnt_pool0_02     State                 300  ONLINE
TSM0_mnt_pool0_03     State                 300  ONLINE
TSM0_mnt_pool0_04     State                 300  ONLINE
TSM0_mnt_pool0_05     State                 300  ONLINE
TSM0_mnt_pool0_06     State                 300  ONLINE
TSM0_nic_bond0        State                 300  ONLINE
TSM0_server           State                 300  OFFLINE

[root@301 ~]# hares -online TSM0_server -sys 300

Ignore the errors below during the first IBM TSM server startup.

IGNORE | ERRORS TO IGNORE DURING FIRST IBM TSM SERVER START
IGNORE | 
IGNORE | DBI1306N  The instance profile is not defined.
IGNORE |
IGNORE | Explanation:
IGNORE |
IGNORE | The instance is not defined in the target machine registry.
IGNORE |
IGNORE | User response:
IGNORE |
IGNORE | Specify an existing instance name or create the required instance.

Install IBM TSM Server Licenses

Screenshots from that process are shown below.

ibm-tsm-install-license-01

ibm-tsm-install-license-02

ibm-tsm-install-license-03

ibm-tsm-install-license-04

Let’s now register the licenses for the IBM TSM.

tsm: TSM0_SITE>register license file=/opt/tivoli/tsm/server/bin/tsmee.lic
ANR2852I Current license information:
ANR2853I New license information:
ANR2828I Server is licensed to support Tivoli Storage Manager Basic Edition.
ANR2828I Server is licensed to support Tivoli Storage Manager Extended Edition.

IBM TSM Client Configuration on the IBM TSM Server Nodes

[root@300 ~]# cat > /opt/tivoli/tsm/client/ba/bin/dsm.opt << __EOF
SERVERNAME TSM0
__EOF

[root@301 ~]# cat > /opt/tivoli/tsm/client/ba/bin/dsm.opt << __EOF
SERVERNAME TSM0
__EOF

[root@300 ~]# cat > /opt/tivoli/tsm/client/ba/bin/dsm.sys << __EOF
SERVERNAME TSM0
COMMMethod TCPip
TCPPort 1500
TCPSERVERADDRESS localhost
SCHEDLOGNAME /opt/tivoli/tsm/client/ba/bin/dsmsched.log
ERRORLOGNAME /opt/tivoli/tsm/client/ba/bin/dsmerror.log
SCHEDLOGRETENTION 7 D
ERRORLOGRETENTION 7 D
__EOF

[root@301 ~]# cat > /opt/tivoli/tsm/client/ba/bin/dsm.sys << __EOF
SERVERNAME TSM0
COMMMethod TCPip
TCPPort 1500
TCPSERVERADDRESS localhost
SCHEDLOGNAME /opt/tivoli/tsm/client/ba/bin/dsmsched.log
ERRORLOGNAME /opt/tivoli/tsm/client/ba/bin/dsmerror.log
SCHEDLOGRETENTION 7 D
ERRORLOGRETENTION 7 D
__EOF
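
With dsm.opt and dsm.sys in place a quick sanity check is to open a session to the server with the dsmc(1) command line client – a hedged example, assuming the default client location; it should connect to the TSM0 server defined above after asking for the node credentials.

[root@300 ~]# dsmc query session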

Install lin_tape on IBM TSM Server

[root@ALL]# uname -r
2.6.32-504.el6.x86_64

[root@ALL]# uname -r | sed 's|.x86_64||g'
2.6.32-504.el6

[root@ALL]# yum --showduplicates list kernel-devel | grep 2.6.32-504.el6
kernel-devel.x86_64            2.6.32-504.el6                 rhel-6-server-rpms

[root@ALL]# yum install rpm-build kernel-devel-2.6.32-504.el6
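
The lin_tape driver is distributed by IBM as a source RPM that has to be rebuilt against the kernel headers installed above – a hedged example, assuming the lin_tape-3.0.10-1.src.rpm file was downloaded to /root; it produces the binary RPM used in the next step.

[root@ALL]# rpmbuild --rebuild /root/lin_tape-3.0.10-1.src.rpm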

[root@ALL]# rpm -Uvh /root/rpmbuild/RPMS/x86_64/lin_tape-3.0.10-1.x86_64.rpm
Preparing...                ########################################### [100%]
   1:lin_tape               ########################################### [100%]
Starting lin_tape...
lin_tape loaded

[root@ALL]# rpm -Uvh lin_taped-3.0.10-rhel6.x86_64.rpm
Preparing...                ########################################### [100%]
   1:lin_taped              ########################################### [100%]
Starting lin_tape...
lin_taped loaded

[root@ALL]# /etc/init.d/lin_tape start
Starting lin_tape... lin_taped already running. Abort!

[root@ALL]# /etc/init.d/lin_tape restart
Shutting down lin_tape... lin_taped unloaded
Starting lin_tape...

Library Configuration

This is quite an unusual configuration – the IBM TS3310 library with 4 LTO4 drives is logically partitioned into two logical libraries, with 2 drives dedicated to Dell/EMC Networker and 2 drives dedicated to the IBM TSM server. The library is shown below.

ibm-tsm-ts3310.jpg

The changers and tape drives for each backup system.

Networker | (L) 000001317577_LLA changer0
TSM       | (L) 000001317577_LLB changer1_persistent_TSM0
Networker | (1) 7310132058       tape0
Networker | (2) 7310295146       tape1
TSM       | (3) 7310214751       tape2_persistent_TSM0
TSM       | (4) 7310214904       tape3_persistent_TSM0
[root@300 ~]# find /dev/IBM*
/dev/IBMchanger0
/dev/IBMchanger1
/dev/IBMSpecial
/dev/IBMtape
/dev/IBMtape0
/dev/IBMtape0n
/dev/IBMtape1
/dev/IBMtape1n
/dev/IBMtape2
/dev/IBMtape2n
/dev/IBMtape3
/dev/IBMtape3n

We will use UDEV for persistent configuration.

[root@300 ~]# udevadm info -a -p $(udevadm info -q path -n /dev/IBMtape0)    | grep -i serial
    ATTR{serial_num}=="7310132058"
[root@300 ~]# udevadm info -a -p $(udevadm info -q path -n /dev/IBMtape1)    | grep -i serial
    ATTR{serial_num}=="7310295146"
[root@300 ~]# udevadm info -a -p $(udevadm info -q path -n /dev/IBMtape2)    | grep -i serial
    ATTR{serial_num}=="7310214751"
[root@300 ~]# udevadm info -a -p $(udevadm info -q path -n /dev/IBMtape3)    | grep -i serial
    ATTR{serial_num}=="7310214904"
[root@300 ~]# udevadm info -a -p $(udevadm info -q path -n /dev/IBMchanger0) | grep -i serial
    ATTR{serial_num}=="000001317577_LLA"
[root@300 ~]# udevadm info -a -p $(udevadm info -q path -n /dev/IBMchanger1) | grep -i serial
    ATTR{serial_num}=="000001317577_LLB"
[root@300 ~]# cat /proc/scsi/IBM*
lin_tape version: 3.0.10
lin_tape major number: 239
Attached Changer Devices:
Number  model       SN                HBA             SCSI            FO Path
0       3576-MTL    000001317577_LLA  qla2xxx         2:0:1:1         NA
1       3576-MTL    000001317577_LLB  qla2xxx         4:0:1:1         NA
lin_tape version: 3.0.10
lin_tape major number: 239
Attached Tape Devices:
Number  model       SN                HBA             SCSI            FO Path
0       ULT3580-TD4 7310132058        qla2xxx         2:0:0:0         NA
1       ULT3580-TD4 7310295146        qla2xxx         2:0:1:0         NA
2       ULT3580-TD4 7310214751        qla2xxx         4:0:0:0         NA
3       ULT3580-TD4 7310214904        qla2xxx         4:0:1:0         NA

[root@300 ~]# cat /etc/udev/rules.d/98-lin_tape.rules
KERNEL=="IBMtape*", SYSFS{serial_num}=="7310132058", MODE="0660", SYMLINK="IBMtape0"
KERNEL=="IBMtape*", SYSFS{serial_num}=="7310295146", MODE="0660", SYMLINK="IBMtape1"
KERNEL=="IBMtape*", SYSFS{serial_num}=="7310214751", MODE="0660", SYMLINK="IBMtape2_persistent_TSM0"
KERNEL=="IBMtape*", SYSFS{serial_num}=="7310214904", MODE="0660", SYMLINK="IBMtape3_persistent_TSM0"
KERNEL=="IBMchanger*", ATTR{serial_num}=="000001317577_LLB", MODE="0660", SYMLINK="IBMchanger1_persistent_TSM0"

[root@301 ~]# /etc/init.d/lin_tape stop
Shutting down lin_tape... lin_taped unloaded

[root@301 ~]# rmmod lin_tape

[root@301 ~]# /etc/init.d/lin_tape start
Starting lin_tape...

New persistent devices.

[root@301 ~]# find /dev/IBM*
/dev/IBMchanger0
/dev/IBMchanger1
/dev/IBMchanger1_persistent_TSM0
/dev/IBMSpecial
/dev/IBMtape
/dev/IBMtape0
/dev/IBMtape0n
/dev/IBMtape1
/dev/IBMtape1n
/dev/IBMtape2
/dev/IBMtape2n
/dev/IBMtape2_persistent_TSM0
/dev/IBMtape3
/dev/IBMtape3n
/dev/IBMtape3_persistent_TSM0

Let’s update the paths to the tape drives now.

tsm: TSM0_SITE>query path f=d

                   Source Name: TSM0_SITE
                   Source Type: SERVER
              Destination Name: TS3310
              Destination Type: LIBRARY
                       Library:
                     Node Name:
                        Device: /dev/IBMchanger0
              External Manager:
              ZOS Media Server:
                  Comm. Method:
                           LUN:
                     Initiator: 0
                     Directory:
                       On-Line: Yes
Last Update by (administrator): ADMIN
         Last Update Date/Time: 09/16/2014 13:36:14

                   Source Name: TSM0_SITE
                   Source Type: SERVER
              Destination Name: DRIVE0
              Destination Type: DRIVE
                       Library: TS3310
                     Node Name:
                        Device: /dev/IBMtape0
              External Manager:
              ZOS Media Server:
                  Comm. Method:
                           LUN:
                     Initiator: 0
                     Directory:
                       On-Line: Yes
Last Update by (administrator): SERVER_CONSOLE
         Last Update Date/Time: 07/14/2016 14:02:02

                   Source Name: TSM0_SITE
                   Source Type: SERVER
              Destination Name: DRIVE1
              Destination Type: DRIVE
                       Library: TS3310
                     Node Name:
                        Device: /dev/IBMtape1
              External Manager:
              ZOS Media Server:
                  Comm. Method:
                           LUN:
                     Initiator: 0
                     Directory:
                       On-Line: Yes
Last Update by (administrator): SERVER_CONSOLE
         Last Update Date/Time: 07/14/2016 13:59:48

tsm: TSM0_SITE>update path TSM0_SITE TS3310 SRCType=SERVER DESTType=LIBRary online=no
ANR1722I A path from TSM0_SITE to TS3310 has been updated.

tsm: TSM0_SITE>update path TSM0_SITE TS3310 SRCType=SERVER DESTType=LIBRary device=/dev/IBMchanger1_persistent_TSM0
ANR1722I A path from TSM0_SITE to TS3310 has been updated.

tsm: TSM0_SITE>update path TSM0_SITE TS3310 SRCType=SERVER DESTType=LIBRary online=yes
ANR1722I A path from TSM0_SITE to TS3310 has been updated.

tsm: TSM0_SITE>update drive TS3310           DRIVE1           SERial=AUTODetect element=AUTODetect
ANR8467I Drive DRIVE1 in library TS3310 updated.

tsm: TSM0_SITE>update drive TS3310           DRIVE1         online=no
ANR8467I Drive DRIVE1 in library TS3310 updated.

tsm: TSM0_SITE>update drive TS3310           DRIVE1           SERial=AUTODetect element=AUTODetect
ANR8467I Drive DRIVE1 in library TS3310 updated.

tsm: TSM0_SITE>update drive TS3310           DRIVE1         online=yes
ANR8467I Drive DRIVE1 in library TS3310 updated.

tsm: TSM0_SITE>update drive TS3310           DRIVE1           SERial=AUTODetect element=AUTODetect
ANR8467I Drive DRIVE1 in library TS3310 updated.

tsm: TSM0_SITE>update drive TS3310           DRIVE1         online=yes
ANR8467I Drive DRIVE1 in library TS3310 updated.

tsm: TSM0_SITE>update path TSM0_SITE DRIVE0 SRCType=SERVER autodetect=yes DESTType=DRIVE library=ts3310 device=/dev/IBMtape2_persistent_TSM0
ANR1722I A path from TSM0_SITE to TS3310 DRIVE0 has been updated.

tsm: TSM0_SITE>update drive TS3310           DRIVE0           SERial=AUTODetect element=AUTODetect
ANR8467I Drive DRIVE0 in library TS3310 updated.

tsm: TSM0_SITE>update path TSM0_SITE DRIVE1 SRCType=SERVER autodetect=yes DESTType=DRIVE library=ts3310 device=/dev/IBMtape3_persistent_TSM0
ANR1722I A path from TSM0_SITE to TS3310 DRIVE1 has been updated.

tsm: TSM0_SITE>update path TSM0_SITE DRIVE1 SRCType=SERVER DESTType=DRIVE library=ts3310 online=yes
ANR1722I A path from TSM0_SITE to TS3310 DRIVE1 has been updated.

tsm: TSM0_SITE>update path TSM0_SITE DRIVE0 SRCType=SERVER DESTType=DRIVE library=ts3310 online=yes
ANR1722I A path from TSM0_SITE to TS3310 DRIVE0 has been updated.


Let’s verify that our library works properly.

tsm: TSM0_SITE>audit library TS3310 checklabel=barcode
ANS8003I Process number 2 started.

tsm: TSM0_SITE>query proc

Process      Process Description      Process Status
  Number
--------     --------------------     -------------------------------------------------
       2     AUDIT LIBRARY            ANR8459I Auditing volume inventory for library
                                       TS3310.


tsm: TSM0_SITE>query act
(...)

08/04/2016 14:30:41      ANR2017I Administrator ADMIN issued command: AUDIT
                          LIBRARY TS3310 checklabel=barcode  (SESSION: 8)
08/04/2016 14:30:41      ANR0984I Process 2 for AUDIT LIBRARY started in the
                          BACKGROUND at 02:30:41 PM. (SESSION: 8, PROCESS: 2)
08/04/2016 14:30:41      ANR8457I AUDIT LIBRARY: Operation for library TS3310
                          started as process 2. (SESSION: 8, PROCESS: 2)
08/04/2016 14:30:46      ANR8358E Audit operation is required for library TS3310.
                          (SESSION: 8, PROCESS: 2)
08/04/2016 14:30:51      ANR8439I SCSI library TS3310 is ready for operations.
                          (SESSION: 8, PROCESS: 2)

(...)

08/04/2016 14:31:26      ANR0985I Process 2 for AUDIT LIBRARY running in the
                          BACKGROUND completed with completion state SUCCESS at
                          02:31:26 PM. (SESSION: 8, PROCESS: 2)

(...)

IBM TSM Storage Pool Configuration

IBM TSM container storage pool creation.

tsm: TSM0_SITE>define stgpool POOL0_stgFC stgtype=directory
ANR2249I Storage pool POOL0_stgFC is defined.

tsm: TSM0_SITE>define stgpooldirectory POOL0_stgFC /tsm0/pool0/pool0_01,/tsm0/pool0/pool0_02,/tsm0/pool0/pool0_03,/tsm0/pool0/pool0_04,/tsm0/pool0/pool0_05,/tsm0/pool0/pool0_06
ANR3254I Storage pool directory /tsm0/pool0/pool0_01 was defined in storage pool POOL0_stgFC.
ANR3254I Storage pool directory /tsm0/pool0/pool0_02 was defined in storage pool POOL0_stgFC.
ANR3254I Storage pool directory /tsm0/pool0/pool0_03 was defined in storage pool POOL0_stgFC.
ANR3254I Storage pool directory /tsm0/pool0/pool0_04 was defined in storage pool POOL0_stgFC.
ANR3254I Storage pool directory /tsm0/pool0/pool0_05 was defined in storage pool POOL0_stgFC.
ANR3254I Storage pool directory /tsm0/pool0/pool0_06 was defined in storage pool POOL0_stgFC.

tsm: TSM0_SITE>q stgpooldirectory

Storage Pool Name     Directory                                         Access
-----------------     ---------------------------------------------     ------------
POOL0_stgFC           /tsm0/pool0/pool0_01                              Read/Write
POOL0_stgFC           /tsm0/pool0/pool0_02                              Read/Write
POOL0_stgFC           /tsm0/pool0/pool0_03                              Read/Write
POOL0_stgFC           /tsm0/pool0/pool0_04                              Read/Write
POOL0_stgFC           /tsm0/pool0/pool0_05                              Read/Write
POOL0_stgFC           /tsm0/pool0/pool0_06                              Read/Write


IBM TSM Backup Policies Configuration

Below is an example policy.

tsm: TSM0_SITE>def dom  FS backret=30 archret=30
ANR1500I Policy domain FS defined.

tsm: TSM0_SITE>def pol  FS FS
ANR1510I Policy set FS defined in policy domain FS.

tsm: TSM0_SITE>def mg   FS FS FS_1DAY
ANR1520I Management class FS_1DAY defined in policy domain FS, set FS.

tsm: TSM0_SITE>def co   FS FS FS_1DAY   STANDARD type=backup destination=POOL0_STGFC verexists=32 verdeleted=1 retextra=31 retonly=14
ANR1530I Backup copy group STANDARD defined in policy domain FS, set FS, management class FS_1DAY.

tsm: TSM0_SITE>def mg   FS FS FS_1MONTH
ANR1520I Management class FS_1MONTH defined in policy domain FS, set FS.

tsm: TSM0_SITE>def co   FS FS FS_1MONTH STANDARD type=backup destination=POOL0_STGFC  verexists=4 verdeleted=1 retextra=91 retonly=14
ANR1530I Backup copy group STANDARD defined in policy domain FS, set FS, management class FS_1MONTH.

tsm: TSM0_SITE>as defmg FS FS FS_1DAY
ANR1538I Default management class set to FS_1DAY for policy domain FS, set FS.

tsm: TSM0_SITE>act pol  FS FS
ANR1554W DEFAULT Management class FS_1DAY in policy set FS FS does not have an ARCHIVE copygroup:  files will not be archived by default if this set is activated.

Do you wish to proceed? (Yes (Y)/No (N)) y
ANR1554W DEFAULT Management class FS_1DAY in policy set FS FS does not have an ARCHIVE copygroup:  files will not be archived by default if this set is activated.
ANR1514I Policy set FS activated in policy domain FS.
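
As FS_1DAY is now the default management class, binding selected client files to the FS_1MONTH class requires an include rule in the client options – a hedged example, with a purely hypothetical /data/monthly path.

* client include/exclude rule - the /data/monthly path is hypothetical
include /data/monthly/.../* FS_1MONTH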



I hope that the amount of instructions did not discourage you from one of the best enterprise backup systems – the IBM TSM (now IBM Spectrum Protect) – and one of the best high availability clusters – the Veritas Cluster Server 🙂

EOF

Syncthing on FreeBSD

This article will show you how to setup Syncthing on FreeBSD system.

syncthing-logo.png

One warning at the beginning – all > and < characters in the Syncthing configuration file were changed to } and { respectively. This is because of a WordPress limitation. Remember that the Syncthing config is an XML file.

For most of my personal backup needs I always use rsync(1), but on limited devices such as phones or tablets it’s a real PITA. Thus for the automated import of photos and other files from such devices I prefer to use the Syncthing tool.

If you haven’t heard about it yet, here is how the Syncthing https://syncthing.net/ site describes it: “Syncthing replaces proprietary sync and cloud services with something open, trustworthy and decentralized. Your data is your data alone and you deserve to choose where it is stored, if it is shared with some third party and how it’s transmitted over the Internet.” … and Wikipedia: “Syncthing is a free, open-source peer-to-peer file synchronization application available for Windows, Mac, Linux, Android, Solaris, Darwin, and BSD. It can sync files between devices on a local network, or between remote devices over the Internet. Data security and data safety are built into the design of the software.”

One may ask how it’s different from Nextcloud, for example. Well, with Nextcloud you have an almost ‘entire’ cloud stack with custom applications at your disposal. With Syncthing you have a synchronization tool between devices and nothing more.

Initially I wanted – similarly as with Nextcloud on FreeBSD – to set up everything in a FreeBSD Jail. The problem is that Syncthing does not work under FreeBSD Jails virtualization, as I figured out after several hours of trying to find out what was wrong. The management interface of Syncthing was working as expected and was accessible, but the Syncthing on the Android mobile phone was not able to connect/sync with the Syncthing instance in the FreeBSD Jail. Sure, I could connect to the Syncthing management interface from the phone, but I still could not do any backup using the Syncthing protocol. Knowing this limitation you have 3 options to choose from:

  • Setup Syncthing on FreeBSD host like any other service.
  • Use FreeBSD Bhyve virtualization for Syncthing instance.
  • Use VirtualBox package/port for Syncthing instance.

I have chosen the first option. It is actually the same for Bhyve and VirtualBox, but additional work is needed for the virtualization layer. I will use an Android based mobile phone as an example of the Syncthing client, but you can sync data between computers as well.

One more thing – there is no such thing as a Syncthing server and a Syncthing client. All Syncthing instances/installations are the same; you can just add/remove devices and directories to synchronize between those devices. I used the term ‘client’ above to show that I will be automating copying of the files from the phone to the FreeBSD server with a Syncthing instance, nothing more.

Host

Here are some basic steps that I have done on the FreeBSD host – things like the aliases database, timezone, DNS, and basic FreeBSD settings in its core /etc/rc.conf file.

# newaliases -v
/etc/mail/aliases: 29 aliases, longest 10 bytes, 297 bytes total

# ln -s /usr/share/zoneinfo/Europe/Warsaw /etc/localtime

# date
Fri Aug 17 22:05:18 CEST 2018

# echo nameserver 1.1.1.1 > /etc/resolv.conf

# ping -c 3 freebsd.org
PING freebsd.org (96.47.72.84): 56 data bytes
64 bytes from 96.47.72.84: icmp_seq=0 ttl=51 time=117.918 ms
64 bytes from 96.47.72.84: icmp_seq=1 ttl=51 time=115.169 ms
64 bytes from 96.47.72.84: icmp_seq=2 ttl=51 time=115.392 ms

--- freebsd.org ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 115.169/116.160/117.918/1.247 ms

… and the main FreeBSD configuration file.

# cat /etc/rc.conf
# NETWORK
  hostname=blackbox.local
  ifconfig_re0="inet 10.0.0.100/24 up"
  defaultrouter="10.0.0.1"

# DAEMONS | YES
  zfs_enable=YES
  sshd_enable=YES
  ntpd_enable=YES
  syncthing_enable=YES
  syslogd_flags="-s -s"

# DAEMONS | no
  sendmail_enable=NONE
  sendmail_submit_enable=NO
  sendmail_outbound_enable=NO
  sendmail_msp_queue_enable=NO

# OTHER
  dumpdev=NO
  update_motd=NO
  virecover_enable=NO
  clear_tmp_enable=YES

Install

First we will switch from quarterly to the latest pkg(8) branch to get the most up to date packages.

# grep url: /etc/pkg/FreeBSD.conf
  url: "pkg+http://pkg.FreeBSD.org/${ABI}/quarterly",

# sed -i '' s/quarterly/latest/g /etc/pkg/FreeBSD.conf

# grep url: /etc/pkg/FreeBSD.conf
  url: "pkg+http://pkg.FreeBSD.org/${ABI}/latest",

We will now bootstrap pkg(8) and then update its database to the latest available one.

# env ASSUME_ALWAYS_YES=yes pkg update -f
Bootstrapping pkg from pkg+http://pkg.FreeBSD.org/FreeBSD:11:amd64/latest, please wait...
Verifying signature with trusted certificate pkg.freebsd.org.2013102301... done
[syncthing.local] Installing pkg-1.10.5_1...
[syncthing.local] Extracting pkg-1.10.5_1: 100%
Updating FreeBSD repository catalogue...
pkg: Repository FreeBSD load error: access repo file(/var/db/pkg/repo-FreeBSD.sqlite) failed: No such file or directory
[syncthing.local] Fetching meta.txz: 100%    944 B   0.9kB/s    00:01    
[syncthing.local] Fetching packagesite.txz: 100%    6 MiB 352.7kB/s    00:19    
Processing entries: 100%
FreeBSD repository update completed. 32388 packages processed.
All repositories are up to date.

… and then install Syncthing from pkg(8) packages.

# pkg install -y syncthing 
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
The following 1 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
        syncthing: 0.14.48

Number of packages to be installed: 1

The process will require 88 MiB more space.
15 MiB to be downloaded.
[1/1] Fetching syncthing-0.14.48.txz: 100%   15 MiB 525.3kB/s    00:29    
Checking integrity... done (0 conflicting)
[1/1] Installing syncthing-0.14.48...
===> Creating groups.
Creating group 'syncthing' with gid '983'.
===> Creating users
Creating user 'syncthing' with uid '983'.
[1/1] Extracting syncthing-0.14.48: 100%
Message from syncthing-0.14.48:

WARNING: This version is not backwards compatible with 0.13.x, 0.12.x, 0.11.x
nor 0.10.x releases!

For more information, please read:

https://forum.syncthing.net/t/syncthing-v0-14-0/7806
https://github.com/syncthing/syncthing/releases/tag/v0.13.0
https://forum.syncthing.net/t/syncthing-v0-11-0-release-notes/2426
https://forum.syncthing.net/t/syncthing-syncthing-v0-12-0-beryllium-bedbug/6026

The Syncthing package created a syncthing user and group for us.

# id syncthing
uid=983(syncthing) gid=983(syncthing) groups=983(syncthing)

Look how small Syncthing is – these are all the files installed by the net/syncthing package.

# pkg info -l syncthing
syncthing-0.14.48:
        /usr/local/bin/stbench
        /usr/local/bin/stcli
        /usr/local/bin/stcompdirs
        /usr/local/bin/stdisco
        /usr/local/bin/stdiscosrv
        /usr/local/bin/stevents
        /usr/local/bin/stfileinfo
        /usr/local/bin/stfinddevice
        /usr/local/bin/stgenfiles
        /usr/local/bin/stindex
        /usr/local/bin/strelaypoolsrv
        /usr/local/bin/strelaysrv
        /usr/local/bin/stsigtool
        /usr/local/bin/sttestutil
        /usr/local/bin/stvanity
        /usr/local/bin/stwatchfile
        /usr/local/bin/syncthing
        /usr/local/etc/rc.d/syncthing
        /usr/local/etc/rc.d/syncthing-discosrv
        /usr/local/etc/rc.d/syncthing-relaypoolsrv
        /usr/local/etc/rc.d/syncthing-relaysrv
        /usr/local/share/doc/syncthing/AUTHORS
        /usr/local/share/doc/syncthing/LICENSE
        /usr/local/share/doc/syncthing/README.md

Configuration

As shown above, we already have syncthing_enable=YES added to the /etc/rc.conf file.

# /usr/local/etc/rc.d/syncthing rcvar
# syncthing
#
syncthing_enable="NO"
#   (default: "")

# grep syncthing_enable /etc/rc.conf
  syncthing_enable=YES

You may also check other startup options in the Syncthing rc(8) startup script.

# less -N /usr/local/etc/rc.d/syncthing
(...)
      9 # Add the following lines to /etc/rc.conf.local or /etc/rc.conf
     10 # to enable this service:
     11 #
     12 # syncthing_enable (bool):      Set to NO by default.
     13 #                               Set it to YES to enable syncthing.
     14 # syncthing_home (path):        Directory where syncthing configuration
     15 #                               data is stored.
     16 #                               Default: /usr/local/etc/syncthing
     17 # syncthing_log_file (path):    Syncthing log file
     18 #                               Default: /var/log/syncthing.log
     19 # syncthing_user (user):        Set user to run syncthing.
     20 #                               Default is "syncthing".
     21 # syncthing_group (group):      Set group to run syncthing.
     22 #                               Default is "syncthing".
(...)
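
These variables can be overridden in the /etc/rc.conf file – for example, to keep the Syncthing configuration somewhere else one could set syncthing_home; the /data/syncthing-config path below is purely hypothetical.

# sysrc syncthing_home="/data/syncthing-config"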

Syncthing needs the /var/log/syncthing.log log file. Let’s create it and set the proper owner and permissions for it.

# ls /var/log/syncthing.log
ls: /var/log/syncthing.log: No such file or directory

# :> /var/log/syncthing.log

# chown syncthing:syncthing /var/log/syncthing.log

# ls -l /var/log/syncthing.log
-rwxr-xr-x  1 syncthing  syncthing  0 2018.08.19 01:06 /var/log/syncthing.log

As we will be using this log file we also need to take care of its rotation; we will use the built-in FreeBSD newsyslog(8) daemon for that purpose.

# cat > /etc/newsyslog.conf.d/syncthing << __EOF
# logfilename              [owner:group]     mode  count  size  when  flags [/pid_file]
/var/log/syncthing.log  syncthing:syncthing  640   7      100   *     JC
__EOF

# cat /etc/newsyslog.conf.d/syncthing
# logfilename              [owner:group]     mode  count  size  when  flags [/pid_file]
/var/log/syncthing.log  syncthing:syncthing  640   7      100   *     JC

# newsyslog -v | grep syncthing
Processing /etc/newsyslog.conf.d/syncthing
/var/log/syncthing.log : size (Kb): 0 [100] --> skipping
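
Once the log grows past the configured 100 kB newsyslog(8) will rotate it on its own; to test the configuration the rotation can also be forced manually.

# newsyslog -F -v /var/log/syncthing.log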

Let’s try to start Syncthing for the first time.

# service syncthing start
Starting syncthing.
daemon: pidfile ``/var/run/syncthing.pid'': Permission denied
/usr/local/etc/rc.d/syncthing: WARNING: failed to start syncthing

It seems that the Syncthing rc(8) startup script does not create the PID file automatically, so let’s create it.

# :> /var/run/syncthing.pid

# chown syncthing:syncthing /var/run/syncthing.pid

# ls -l /var/run/syncthing.pid
-rwxr-xr-x  1 syncthing  syncthing  0 2018.08.19 01:08 /var/run/syncthing.pid

Now let’s try to start Syncthing again.

# service syncthing start
Starting syncthing.

Better. Let’s see which ports it uses.

# sockstat -l -4 | grep syncthing
syncthing syncthing 27499 9  tcp46  *:22000               *:*
syncthing syncthing 27499 10 udp4   *:18876               *:*
syncthing syncthing 27499 13 udp4   *:21027               *:*
syncthing syncthing 27499 20 tcp4   127.0.0.1:8384        *:*

… and check its log file.

# cat /var/log/syncthing.log
[start] 01:08:40 INFO: Generating ECDSA key and certificate for syncthing...
[MPN4S] 01:08:40 INFO: syncthing v0.14.48 "Dysprosium Dragonfly" (go1.10.3 freebsd-amd64) root@111amd64-default-job-12 2018-08-08 09:19:19 UTC [noupgrade]
[MPN4S] 01:08:40 INFO: My ID: MPN4S65-UQWC5SP-3LR2XDB-T5JNYET-VQEQC3X-DSAUI27-BQQKZQE-BWQ3NAO
[MPN4S] 01:08:41 INFO: Single thread SHA256 performance is 131 MB/s using minio/sha256-simd (89 MB/s using crypto/sha256).
[MPN4S] 01:08:41 INFO: Default folder created and/or linked to new config
[MPN4S] 01:08:41 INFO: Default config saved. Edit /usr/local/etc/syncthing/config.xml to taste or use the GUI
[MPN4S] 01:08:42 INFO: Hashing performance is 112.85 MB/s
[MPN4S] 01:08:42 INFO: Updating database schema version from 0 to 2...
[MPN4S] 01:08:42 INFO: Updated symlink type for 0 index entries and added 0 invalid files to global list
[MPN4S] 01:08:42 INFO: Finished updating database schema version from 0 to 2
[MPN4S] 01:08:42 INFO: No stored folder metadata for "default": recalculating
[MPN4S] 01:08:42 WARNING: Creating directory for "Default Folder" (default): mkdir /Sync/: permission denied
[MPN4S] 01:08:42 WARNING: Creating folder marker: folder path missing
[MPN4S] 01:08:42 INFO: Ready to synchronize "Default Folder" (default) (readwrite)
[MPN4S] 01:08:42 INFO: Overall send rate is unlimited, receive rate is unlimited
[MPN4S] 01:08:42 INFO: Rate limits do not apply to LAN connections
[MPN4S] 01:08:42 INFO: Using discovery server https://discovery-v4.syncthing.net/v2/?nolookup&id=LYXKCHX-VI3NYZR-ALCJBHF-WMZYSPK-QG6QJA3-MPFYMSO-U56GTUK-NA2MIAW
[MPN4S] 01:08:42 INFO: Using discovery server https://discovery-v6.syncthing.net/v2/?nolookup&id=LYXKCHX-VI3NYZR-ALCJBHF-WMZYSPK-QG6QJA3-MPFYMSO-U56GTUK-NA2MIAW
[MPN4S] 01:08:42 INFO: Using discovery server https://discovery.syncthing.net/v2/?noannounce&id=LYXKCHX-VI3NYZR-ALCJBHF-WMZYSPK-QG6QJA3-MPFYMSO-U56GTUK-NA2MIAW
[MPN4S] 01:08:42 INFO: TCP listener ([::]:22000) starting
[MPN4S] 01:08:42 INFO: Relay listener (dynamic+https://relays.syncthing.net/endpoint) starting
[MPN4S] 01:08:42 WARNING: Error on folder "Default Folder" (default): folder path missing
[MPN4S] 01:08:42 INFO: Failed initial scan of readwrite folder "Default Folder" (default)
[MPN4S] 01:08:42 INFO: Device MPN4S65-UQWC5SP-3LR2XDB-T5JNYET-VQEQC3X-DSAUI27-BQQKZQE-BWQ3NAO is "blackbox.local" at [dynamic]
[MPN4S] 01:08:42 INFO: Loading HTTPS certificate: open /usr/local/etc/syncthing/https-cert.pem: no such file or directory
[MPN4S] 01:08:42 INFO: Creating new HTTPS certificate
[MPN4S] 01:08:42 INFO: GUI and API listening on 127.0.0.1:8384
[MPN4S] 01:08:42 INFO: Access the GUI via the following URL: http://127.0.0.1:8384/
[MPN4S] 01:08:55 INFO: Joined relay relay://11.12.13.14:443
[MPN4S] 01:09:02 INFO: Detected 1 NAT service

We have several WARNING messages here about the default /Sync directory. Let’s fix those.

# service syncthing stop
Stopping syncthing.
Waiting for PIDS: 27498.

Upon the first Syncthing start the /usr/local/etc/syncthing directory was created with its configuration.

# find /usr/local/etc/syncthing
/usr/local/etc/syncthing
/usr/local/etc/syncthing/https-cert.pem
/usr/local/etc/syncthing/https-key.pem
/usr/local/etc/syncthing/cert.pem
/usr/local/etc/syncthing/key.pem
/usr/local/etc/syncthing/config.xml
/usr/local/etc/syncthing/index-v0.14.0.db
/usr/local/etc/syncthing/index-v0.14.0.db/MANIFEST-000000
/usr/local/etc/syncthing/index-v0.14.0.db/LOCK
/usr/local/etc/syncthing/index-v0.14.0.db/000001.log
/usr/local/etc/syncthing/index-v0.14.0.db/LOG
/usr/local/etc/syncthing/index-v0.14.0.db/CURRENT

Now let’s get back to fixing the WARNING about the /Sync directory.

# grep '/Sync' /usr/local/etc/syncthing/config.xml
    {folder id="default" label="Default Folder" path="//Sync" type="readwrite" rescanIntervalS="3600" fsWatcherEnabled="true" fsWatcherDelayS="10" ignorePerms="false" autoNormalize="true"}

# ls /Sync
ls: /Sync: No such file or directory

Now let’s create a dedicated directory for our Syncthing instance and also set it in the /usr/local/etc/syncthing/config.xml config file.

# mkdir /syncthing

# chown syncthing:syncthing /syncthing

# chmod 750 /syncthing

# vi /usr/local/etc/syncthing/config.xml

# grep '/syncthing' /usr/local/etc/syncthing/config.xml
    {folder id="default" label="Default Folder" path="/syncthing" type="readwrite" rescanIntervalS="3600" fsWatcherEnabled="true" fsWatcherDelayS="10" ignorePerms="false" autoNormalize="true"}

We will also disable the Relay and Global Announce servers, but we will leave the Local Announce server enabled.

# grep -i relay /usr/local/etc/syncthing/config.xml
        {relaysEnabled}true{/relaysEnabled}
        {relayReconnectIntervalM}10{/relayReconnectIntervalM}

# vi /usr/local/etc/syncthing/config.xml

# grep -i relay /usr/local/etc/syncthing/config.xml
        {relaysEnabled}false{/relaysEnabled}
        {relayReconnectIntervalM}10{/relayReconnectIntervalM}

# grep globalAnnounce /usr/local/etc/syncthing/config.xml
        {globalAnnounceServer}default{/globalAnnounceServer}
        {globalAnnounceEnabled}true{/globalAnnounceEnabled}

# vi /usr/local/etc/syncthing/config.xml

# grep globalAnnounce /usr/local/etc/syncthing/config.xml
        {globalAnnounceServer}default{/globalAnnounceServer}
        {globalAnnounceEnabled}false{/globalAnnounceEnabled}

Before restarting Syncthing, let’s clean the /var/log/syncthing.log file to eliminate the now unneeded information.

# service syncthing stop
Stopping syncthing.

# :> /var/log/syncthing.log

# service syncthing start
Starting syncthing.

Let’s check what the log holds for us now.

# cat /var/log/syncthing.log
[MPN4S] 01:13:38 INFO: syncthing v0.14.48 "Dysprosium Dragonfly" (go1.10.3 freebsd-amd64) root@111amd64-default-job-12 2018-08-08 09:19:19 UTC [noupgrade]
[MPN4S] 01:13:38 INFO: My ID: MPN4S65-UQWC5SP-3LR2XDB-T5JNYET-VQEQC3X-DSAUI27-BQQKZQE-BWQ3NAO
[MPN4S] 01:13:39 INFO: Single thread SHA256 performance is 131 MB/s using minio/sha256-simd (89 MB/s using crypto/sha256).
[MPN4S] 01:13:40 INFO: Hashing performance is 112.97 MB/s
[MPN4S] 01:13:40 INFO: Ready to synchronize "Default Folder" (default) (readwrite)
[MPN4S] 01:13:40 INFO: Overall send rate is unlimited, receive rate is unlimited
[MPN4S] 01:13:40 INFO: Rate limits do not apply to LAN connections
[MPN4S] 01:13:40 INFO: Device MPN4S65-UQWC5SP-3LR2XDB-T5JNYET-VQEQC3X-DSAUI27-BQQKZQE-BWQ3NAO is "blackbox.local" at [dynamic]
[MPN4S] 01:13:40 INFO: TCP listener ([::]:22000) starting
[MPN4S] 01:13:40 INFO: Completed initial scan of readwrite folder "Default Folder" (default)
[MPN4S] 01:13:40 INFO: GUI and API listening on 127.0.0.1:8384
[MPN4S] 01:13:40 INFO: Access the GUI via the following URL: http://127.0.0.1:8384/

We can see that the management interface listens on HTTP, not HTTPS, because the tls option is set to false. We will fix that and also switch the management interface address from localhost (127.0.0.1) to our IP address (10.0.0.100).

# grep -B 1 -A 3 127.0.0.1 /usr/local/etc/syncthing/config.xml
    {gui enabled="true" tls="false" debugging="false"}
        {address}127.0.0.1:8384{/address}
        {apikey}2jU5aR4zTJLGdEuSLLmdRGgfCgJaUpUv{/apikey}
        {theme}default{/theme}
    {/gui}

# vi /usr/local/etc/syncthing/config.xml

# grep -B 1 -A 3 10.0.0.100 /usr/local/etc/syncthing/config.xml
    {gui enabled="true" tls="true" debugging="false"}
        {address}10.0.0.100:8384{/address}
        {apikey}2jU5aR4zTJLGdEuSLLmdRGgfCgJaUpUv{/apikey}
        {theme}default{/theme}
    {/gui}

Let’s verify our changes now.

# service syncthing stop
Stopping syncthing.

# :> /var/log/syncthing.log

# service syncthing start
Starting syncthing.

# cat /var/log/syncthing.log
[MPN4S] 01:16:20 INFO: syncthing v0.14.48 "Dysprosium Dragonfly" (go1.10.3 freebsd-amd64) root@111amd64-default-job-12 2018-08-08 09:19:19 UTC [noupgrade]
[MPN4S] 01:16:20 INFO: My ID: MPN4S65-UQWC5SP-3LR2XDB-T5JNYET-VQEQC3X-DSAUI27-BQQKZQE-BWQ3NAO
[MPN4S] 01:16:21 INFO: Single thread SHA256 performance is 131 MB/s using minio/sha256-simd (89 MB/s using crypto/sha256).
[MPN4S] 01:16:22 INFO: Hashing performance is 113.07 MB/s
[MPN4S] 01:16:22 INFO: Ready to synchronize "Default Folder" (default) (readwrite)
[MPN4S] 01:16:22 INFO: Overall send rate is unlimited, receive rate is unlimited
[MPN4S] 01:16:22 INFO: Rate limits do not apply to LAN connections
[MPN4S] 01:16:22 INFO: TCP listener ([::]:22000) starting
[MPN4S] 01:16:22 INFO: Completed initial scan of readwrite folder "Default Folder" (default)
[MPN4S] 01:16:22 INFO: Device MPN4S65-UQWC5SP-3LR2XDB-T5JNYET-VQEQC3X-DSAUI27-BQQKZQE-BWQ3NAO is "blackbox.local" at [dynamic]
[MPN4S] 01:16:22 INFO: GUI and API listening on 10.0.0.100:8384
[MPN4S] 01:16:22 INFO: Access the GUI via the following URL: https://10.0.0.100:8384/
[MPN4S] 01:16:42 INFO: Detected 1 NAT service

The log is now ‘clean’ and we can continue in the browser at the https://10.0.0.100:8384 management interface for the rest of the Syncthing configuration. The browser will of course warn us about the untrusted HTTPS certificate.
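
Before moving to the browser, the management interface can also be queried from the command line with the Syncthing REST API – a quick hedged check using the apikey value from the config.xml shown above; -k accepts the self-signed certificate and /rest/system/status returns a JSON document with the instance status.

# curl -k -H "X-API-Key: 2jU5aR4zTJLGdEuSLLmdRGgfCgJaUpUv" \
    https://10.0.0.100:8384/rest/system/status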

syncthing-01.png

Syncthing will ask us whether we agree to share statistics data. I leave that choice to you.

syncthing-02.png

The Syncthing dashboard welcomes us with a big red warning about remote administration being allowed without a password. We will fix that in a moment – click the Settings button in that warning.

syncthing-03

Leave the first General tab unmodified.

syncthing-04.png

On the GUI tab we will create the admin user with the SYNCTHINGPASSWORD password for the Syncthing management interface. Use something more sensible here 🙂

syncthing-05.png

I did not modify settings on the Connections tab. Click Save to continue.

syncthing-06.png

Besides setting the user and password I haven’t changed/set any other options.

We now have Syncthing without errors. You will be prompted for that user and password in a moment. We will now remove the Default Folder as it’s not needed. Hit its Edit button.

syncthing-07.png

Then click the Remove button on the bottom.

syncthing-08.png

… and click Yes for confirmation.

syncthing-09.png

The ‘empty’ Syncthing dashboard.

syncthing-10.png

Next we will download, install, and configure Syncthing on the Android phone. Depending on your preferences use the F-Droid repository or the Google Play repository … or just an APK file from the source of your choice. The installed Syncthing application is shown below. It takes about 50 MB.

syncthing-11

Let’s start it then; you will see the Welcome message from the Syncthing application.

syncthing-12

Depending on your Android version your phone may ask you to grant Syncthing various permissions. Agree.

syncthing-13

Same as earlier, Syncthing will ask you whether you agree to share statistics data. I also leave that choice to you.

syncthing-14

Syncthing will now require a restart; tap RESTART NOW to continue.

syncthing-15

By default the Camera directory is preconfigured, pointing at the /storage/emulated/0/DCIM directory which holds photos and screenshots taken on the phone. It’s enough for me, so I will use it. Tap the Syncthing hamburger menu button.

syncthing-19

… and select Web GUI option.

syncthing-20

You will see the management interface of Syncthing on your Android phone; scroll down to add the blackbox.local Syncthing instance from FreeBSD in the Remote Devices section.

syncthing-21

Now in the Remote Devices section hit the Add Remote Device button.

syncthing-22

Remember that Local Announce service we left enabled? This is when it comes in handy. Our Syncthing instance ID from FreeBSD will be displayed, as it was automatically detected on the network.

syncthing-23

Click on the displayed ID and enter the blackbox.local hostname.

Besides entering (clicking) the ID and hostname I did not set any other options. Click Save.

syncthing-24

The blackbox.local will be added to the Remote Devices list.

syncthing-25

Below are the Camera directory properties. Remember to select blackbox.local as the allowed host (small yellow slider).

syncthing-26

… and the blackbox.local device properties.

syncthing-27

Now let’s get back to the FreeBSD Syncthing instance management interface in the browser. You will be prompted to add the Syncthing of the Android phone – SM-A320FL in my case – to the devices. Hit the green Add Device button.

syncthing-28.png

Click Save without adding other options.

syncthing-29.png

The SM-A320FL device for our Android phone is now visible in the Remote Devices section.

syncthing-30.png

You should now be prompted that the SM-A320FL device wants to share the Camera directory. Hit the green Add button.

syncthing-31.png

Enter SM-A320FL as the folder label and /syncthing/SM-A320FL as the directory name on the FreeBSD Syncthing instance. Also make sure that SM-A320FL is selected in the Share With Devices section at the bottom.

syncthing-32.png

The SM-A320FL device and the SM-A320FL folder from this device are now configured. You will first see an Out of Sync message for the SM-A320FL folder. The synchronization should now start; its progress can be observed both on the phone and in the management interface of the FreeBSD Syncthing instance in the browser.

syncthing-33.png

The SM-A320FL folder switched its status to Syncing, with progress shown.

syncthing-34.png

You will see a similar status on the Android phone.

syncthing-36

After some time you will see that the SM-A320FL folder has the Up to Date status. That means that all files from the Camera directory are synchronized to the FreeBSD Syncthing instance.

syncthing-35

The created/synced directories from the Android phone look as follows on the FreeBSD Syncthing instance.

# find /syncthing -type d
/syncthing
/syncthing/SM-A320FL
/syncthing/SM-A320FL/Camera
/syncthing/SM-A320FL/Camera/.AutoPortrait
/syncthing/SM-A320FL/Screenshots
/syncthing/SM-A320FL/.thumbnails
/syncthing/SM-A320FL/.stfolder

Now you have your Camera files synced as a backup.

The complete Syncthing config file from the FreeBSD instance (/usr/local/etc/syncthing/config.xml) is available here. After downloading, rename it from *.xml.key to *.xml (WordPress limitation).

UPDATE 1

The Syncthing on FreeBSD article was featured in the BSD Now 262 – OpenBSD Surfacing episode.

Thanks for mentioning!

EOF

Valuable News – 2018/08/04

UNIX

Non-Cross-DSO CFI enabled HardenedBSD 12-CURRENT/arm64 image for Pine64 LTS.
https://twitter.com/lattera/status/1023013287166926848

FreeBSD UEFI Secure Boot.
https://www.freebsdfoundation.org/freebsd-uefi-secure-boot/

KSH Shell Completions.
https://github.com/qbit/dotfiles/blob/master/common/dot_ksh_completions

UNIX – The “Always On” OS.

Polish BSD User Group videos/talks from #1 and #2 meetings.
https://www.youtube.com/channel/UCWvDQaEHSULuCIVOOM66nYA/videos?sort=dd

FreeBSD – Chelsio N320E 10G Ethernet.
https://www.boris-tassou.fr/freebsd-chelsio-n320e/ [FRENCH]
https://translate.google.com/translate?sl=auto&tl=en&u=https://www.boris-tassou.fr/freebsd-chelsio-n320e/ [ENGLISH]

Chicken Bit as ‘G-2 Mitigation’ on OpenBSD.
https://twitter.com/OpenBSD_stable/status/1023952724746854400

Haiku OS Working On Updated Drivers From FreeBSD.
https://www.phoronix.com/scan.php?page=news_item&px=Haiku-OS-July-2018

FreeBSD/i386 is now using LLD as bootstrap linker.
https://svnweb.freebsd.org/changeset/base/336901

VIMAGE now enabled by default on FreeBSD 12-CURRENT/arm64.
https://svnweb.freebsd.org/base?view=revision&revision=336915

First OpenWRT 18.06 release since merge with LEDE project.
https://www.phoronix.com/scan.php?page=news_item&px=OpenWRT-18.06-Released

FreeBSD Foundation 2018/07 Development Projects Update.
https://www.freebsdfoundation.org/blog/july-2018-development-projects-update/

FreeNAS 11.2-BETA2 Available.
https://www.ixsystems.com/blog/library/freenas-11-2-beta2/

With security.jail.vmm_allowed it’s now possible to run bhyve(8) within a FreeBSD jail(8).
https://svnweb.freebsd.org/base?view=revision&revision=337023

ZFS File Server.
https://aravindh.net/post/zfs_fileserver/

ZFS Performance.
https://aravindh.net/post/zfs_performance/

Improved ZFS performance on high IOPS workload by 12% for 8k record size on FreeBSD.
https://svnweb.freebsd.org/base?view=revision&revision=337229

Reflection on one year usage of OpenBSD.
https://nanxiao.me/en/reflection-on-one-year-usage-of-openbsd/

The template user with PAM and login(1) on FreeBSD.
http://oshogbo.vexillium.org/blog/48

In Other BSDs for 2018/08/04.
https://www.dragonflydigest.com/2018/08/04/21594.html

FEMP stack on Amazon EC2.
https://staktrace.com/spout/entry.php?id=840

OpenBSD on an Apple iBook G4.
https://bobstechsite.com/openbsd-on-an-ibook-g4/

Hardware

Future of VIA x86 Processors.
https://www.cambus.net/the-future-of-via-x86-processors/

Rise of the Centaur.
https://vimeo.com/ondemand/riseofthecentaur

India's first RISC-V based Chip is here.
http://www.geekdave.in/2018/07/indias-first-risc-v-is-here-linux-boots.html

DJ rig with two Amiga 1200.
http://cdm.link/2018/07/dj-mod-amiga-1200-commodore/

Apollo is an Amiga Classic accelerator board with a code-compatible Motorola M68K processor that is 3-4 times faster than the fastest 68060.
http://www.apollo-accelerators.com/

Xfce 4.14 development and migration to Gtk3 is progressing well.
https://twitter.com/xfceofficial/status/1024979817953935361

Life

Melatonin – Much More Than You Wanted To Know.
https://www.lesswrong.com/posts/E4cKD9iTWHaE7f3AJ/melatonin-much-more-than-you-wanted-to-know

Other

ReactOS is now able to boot from BTRFS.
https://reactos.org/blogs/gsoc-2018-booting-btrfs-works

Evolving the Firefox Brand.
https://blog.mozilla.org/opendesign/evolving-the-firefox-brand/

Valuable News – 2018/07/27

UNIX

In Other BSDs for 2018/07/21.
https://www.dragonflydigest.com/2018/07/21/21525.html

NetBSD 8.0 Released.
https://www.netbsd.org/releases/formal-8/NetBSD-8.0.html

Changes to NetBSD release support policy.
https://mail-index.netbsd.org/netbsd-announce/2018/07/25/msg000290.html

ZFS Zpool Checkpoints.
http://oshogbo.vexillium.org/blog/46/
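
In short the new zpool(8) subcommand works like below – a sketch assuming a pool named mypool:

# zpool checkpoint mypool
(… do the risky things – upgrades/removals/etc …)
# zpool export mypool
# zpool import --rewind-to-checkpoint mypool
(… or when all went fine – discard it to reclaim space …)
# zpool checkpoint -d mypool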

Hidden Gems of XTERM.
https://lukas.zapletalovi.com/2013/07/hidden-gems-of-xterm.html

Less Known Solaris XTerm Features.
https://twitter.com/vmisev/status/1021525740406403073
https://twitter.com/vmisev/status/1021513491532926983

Extend FreeBSD loader(8) geli support to all architectures and all disk-like devices.
https://svnweb.freebsd.org/base?view=revision&revision=336252

Scripts to create HardenedBSD ISO and KVM image for SmartOS and Triton.
https://github.com/wasted/hardenedbsd-kvm-image-builder

Antergos Linux installer now allows root (/) on ZFS but the /boot partition remains EXT4.
https://antergos.com/

Slackware Linux hit 25 years recently.
http://www.slackware.com/announce/1.0.php

Configure FreeBSD Jails with vnet (VIMAGE) and ZFS.
https://www.cyberciti.biz/faq/how-to-configure-a-freebsd-jail-with-vnet-and-zfs/
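
For reference, a hypothetical minimal setup could look like below – the dataset name, jail name and epair(4) interface are made up for illustration, the linked article covers the details:

# zfs create -o mountpoint=/jails/test zroot/jails/test

/etc/jail.conf:

test {
  path = "/jails/test";
  vnet;
  vnet.interface = "epair0b";
  exec.start = "/bin/sh /etc/rc";
  exec.stop = "/bin/sh /etc/rc.shutdown";
  mount.devfs;
}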

FreeBSD on ARM64.
https://community.online.net/t/freebsd-on-arm64/6678

ZFS for Linux.
https://www.linuxjournal.com/content/zfs-linux

cal(1) output colored with grep(1) – see the sketch below.
https://twitter.com/vermaden/status/1021690491476340737
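
One popular variant of that trick is to highlight the current day – a sketch assuming --color support in grep(1) and the space-padded %e format of date(1):

% cal | grep --color=always -E "$(date +%e)|$"

The |$ alternative matches the empty string at each line end so all lines are still printed while only the current day gets colored.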

Solaris had/has an 11-year-old privilege escalation bug.
https://www.theregister.co.uk/2018/07/24/oracle_repatch_old_solaris_bug/

OPNsense 18.1.13 Released.
https://forum.opnsense.org/index.php?topic=9237.0

Tribblix m20.5 (ami-7cf2181b) and LX-enabled OmniTribblix m20.5 (ami-90fb11f7) available in the AWS London (eu-west-2) region.
http://www.tribblix.org/aws.html

Because Computers | BSD Now 2^8.
http://www.jupiterbroadcasting.com/126261/because-computers-bsd-now-28/

ZFS Private Beta on Citus Cloud.
https://www.citusdata.com/blog/2018/07/19/ZFS-beta-on-citus-cloud/

Oracle Solaris 11.4 beta progress from 32bit to 64bit.
https://blogs.oracle.com/solaris/oracle-solaris-114-beta-progress-on-lp64-conversion-v2

DragonFly BSD will implement new rc(8) mechanism to run scripts only once.
https://www.dragonflydigest.com/2018/07/25/21549.html

OmniOSce r151026 gets automatic boot-environment naming.
https://omniosce.org/article/auto-be-name

Expose SmartOS metadata to CFEngine.
https://github.com/bahamat/cfengine-smartos-metadata/blob/master/README.md

Sysadmin Guide to Ansible – How to Simplify Tasks.
https://opensource.com/article/18/7/sysadmin-tasks-ansible

More mitigations against speculative execution vulnerabilities from OpenBSD team.
https://undeadly.org/cgi?action=article;sid=20180724072257

FreeBSD images now available for GCE (Google Cloud Engine).
https://googlecloudplatform.uservoice.com/forums/302595-compute-engine/suggestions/18618931-freebsd
https://console.cloud.google.com/marketplace/details/freebsd-cloud/freebsd-11

DTrace on Linux Update.
https://blogs.oracle.com/linux/dtrace-on-linux%3a-an-update

NetBSD on the PineBook.
https://pbs.twimg.com/media/DjB7R23X4AIxr61.jpg:large

FreeBSD kernel module loading mechanism imported into Illumos.
http://src.illumos.org/source/xref/freebsd-head/sys/conf/kmod.mk

PBOY – small CLI tool to rename PDF files with useless names based on the suggestions found in file content and metadata.
https://github.com/2mol/pboy

Hardware

AMD 2018 Q2 Results – Best Quarter In 7 Years.
https://www.anandtech.com/show/13121/amd-announces-q2-2018-results

Why Intel will never let owners control the ME.
https://www.devever.net/~hl/intelme

Tool for partial deblobbing of Intel ME/TXE firmware images.
https://github.com/corna/me_cleaner

Intel x86 Considered Harmful by Joanna Rutkowska.
https://blog.invisiblethings.org/papers/2015/x86_harmful.pdf

Backblaze 2018 Q2 Hard Drive Stats.
https://www.backblaze.com/blog/hard-drive-stats-for-q2-2018/

Life

When We Eat or Don't Eat May Be Critical for Health.
https://www.nytimes.com/2018/07/24/well/when-we-eat-or-dont-eat-may-be-critical-for-health.html

While We Sleep Our Mind Goes on an Amazing Journey.
https://www.nationalgeographic.com/magazine/2018/08/science-of-sleep/

Other

My favorite apps on F-Droid.
https://quaap.com/D/use-fdroid

Why I will never use Windows 8 or Windows 10.
https://www.devever.net/~hl/windows8

Exposing more of ICU to PostgreSQL.
https://postgresql.verite.pro/blog/2018/07/25/icu-extension.html

EOF
