
Unbound DNS Blacklist

Today I will show you how to configure unbound(8) to block spam/malicious/malware domains at the DNS level.

unbound

I will use FreeBSD for that purpose but you can use any system that unbound(8) runs on.

logo-freebsd

Earlier I used a generated /etc/hosts file for that but it was limited in several ways. The ZSH shell would autocomplete all these blocked domains for the ssh(1)/scp(1) commands (which takes needless time and shows useless completions). Subdomains are also not handled. The malicious.com domain is blocked but ads.malicious.com is not, so you need to duplicate all those subdomains in the /etc/hosts file.
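
With unbound(8) a single local-zone entry covers a domain and everything under it, while with the /etc/hosts approach every single name needs its own line. A small comparison with hypothetical entries:

# /etc/hosts - every name needs to be listed separately
0.0.0.0 malicious.com
0.0.0.0 ads.malicious.com

# unbound(8) - one entry covers malicious.com and all its subdomains
local-zone: "malicious.com" always_nxdomain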

TL;DR

Not all people have time for my long boring stories so here is the gist of this article.

# rm -rf /var/unbound
# mkdir -p /var/unbound/conf.d
# chown -R unbound:unbound /var/unbound
# service local_unbound setup
# service local_unbound enable
# service local_unbound start
# mkdir /root/bin
# cd 
# fetch -o /root/bin/unbound-blacklist-fetch.sh \
> https://raw.githubusercontent.com/vermaden/scripts/master/unbound-blacklist-fetch.sh
# chmod +x /root/bin/unbound-blacklist-fetch.sh
# /root/bin/unbound-blacklist-fetch.sh
# service local_unbound restart
# cat << BSD >> /var/cron/tabs/root
> # FETCH FRESH unbound(8) BLACKLIST
>   0 0 * * * /root/bin/unbound-blacklist-fetch.sh
> BSD

Whole Story

The unbound(8) caching DNS resolver was added to the FreeBSD base system in 2014 with the 10.0-RELEASE version, so on FreeBSD you do not need to install anything. We will start with cleaning any existing unbound(8) configuration, which lives under /var/unbound. Keep in mind that /etc/unbound links to it.

# ls -l -d /etc/unbound /var/unbound
lrwxr-xr-x 1 root    wheel   14 2019.09.21 16:23 /etc/unbound -> ../var/unbound
drwxr-xr-x 3 unbound unbound  8 2020.11.17 16:48 /var/unbound

# rm -rf /var/unbound

# mkdir -p /var/unbound/conf.d

# chown -R unbound:unbound /var/unbound

The service local_unbound setup command will create all the needed configuration.

Just keep in mind that this process will set up forwarding to all DNS servers that you have in the /etc/resolv.conf file.

You may want to put two of your favorite DNS servers there before running it.

Configuration

# cat << BSD > /etc/resolv.conf
nameserver 9.9.9.9
nameserver 1.1.1.1
BSD

# service local_unbound setup
Performing initial setup.
destination: 
Extracting forwarders from /etc/resolv.conf.
/var/unbound/forward.conf created
/var/unbound/lan-zones.conf created
/var/unbound/control.conf created
/var/unbound/unbound.conf created
/etc/resolvconf.conf created
Original /etc/resolv.conf saved as /var/backups/resolv.conf.20201115.235254

# rm /var/backups/resolv.conf.20201115.235254

# find /var/unbound
/var/unbound
/var/unbound/lan-zones.conf
/var/unbound/control.conf
/var/unbound/unbound.conf
/var/unbound/forward.conf

% find /var/unbound -ls
 12685  17  drwxr-xr-x  3  unbound  unbound    8  Nov 17 16:48  /var/unbound
 13072   1  -rw-r--r--  1  root     unbound   98  Nov 17 05:00  /var/unbound/forward.conf
 12688   9  -rw-r--r--  1  root     unbound  354  Nov 15 23:56  /var/unbound/unbound.conf
 12686   1  drwxr-xr-x  2  unbound  unbound    3  Nov 16 00:23  /var/unbound/conf.d
 12158   9  -rw-r--r--  1  root     unbound  193  Nov 15 23:56  /var/unbound/control.conf
 11732   9  -rw-r--r--  1  root     unbound  189  Nov 15 23:56  /var/unbound/lan-zones.conf

# tail -n 999 /var/unbound/*
==> /var/unbound/conf.d <==
tail: /var/unbound/conf.d: Is a directory

==> /var/unbound/control.conf <==
# This file was generated by local-unbound-setup.
# Modifications will be overwritten.
remote-control:
	control-enable: yes
	control-interface: /var/run/local_unbound.ctl
	control-use-cert: no

==> /var/unbound/forward.conf <==
# Generated by resolvconf

forward-zone:
	name: "."
	forward-addr: 9.9.9.9
	forward-addr: 1.1.1.1

==> /var/unbound/lan-zones.conf <==
# This file was generated by local-unbound-setup.
# Modifications will be overwritten.
server:
	# Unblock reverse lookups for LAN addresses
	unblock-lan-zones: yes
	insecure-lan-zones: yes

==> /var/unbound/unbound.conf <==
# This file was generated by local-unbound-setup.
# Modifications will be overwritten.
server:
	username: unbound
	directory: /var/unbound
	chroot: /var/unbound
	pidfile: /var/run/local_unbound.pid
	auto-trust-anchor-file: /var/unbound/root.key

include: /var/unbound/lan-zones.conf
include: /var/unbound/control.conf
include: /var/unbound/conf.d/*.conf

We will now enable the local_unbound service and start it – at this point still without any DNS blocking configuration.

# service local_unbound enable
local_unbound enabled in /etc/rc.conf

# service local_unbound start
Starting local_unbound.

The /etc/resolv.conf file will now have your favorite DNS servers commented out (disabled) and the 127.0.0.1 address will be specified instead. You can also use sockstat(8) to check that unbound(8) is indeed listening on port 53.

# cat /etc/resolv.conf
# nameserver 9.9.9.9
# nameserver 1.1.1.1
nameserver 127.0.0.1
options edns0

% sockstat -l -4
USER COMMAND PID FD PROTO LOCAL ADDRESS FOREIGN ADDRESS 
unbound local-unbo 7362 5 udp4 127.0.0.1:53 *:*
unbound local-unbo 7362 6 tcp4 127.0.0.1:53 *:*

Test

After unbound(8) has been enabled you should see that the first DNS request takes longer while the second and all following requests are answered very fast from the cache.

% time host ftp.freebsd.org
ftp.freebsd.org is an alias for ftp.geo.freebsd.org.
ftp.geo.freebsd.org has address 139.178.72.202
ftp.geo.freebsd.org has address 213.138.116.78
ftp.geo.freebsd.org has address 139.178.72.202
ftp.geo.freebsd.org has IPv6 address 2604:1380:2000:9501::15:0
ftp.geo.freebsd.org has IPv6 address 2001:41c8:112:8300::15:0
ftp.geo.freebsd.org has IPv6 address 2604:1380:2000:9501::15:0
ftp.geo.freebsd.org mail is handled by 0 .
host ftp.freebsd.org  0.00s user 0.01s system 1% cpu 0.501 total

% time host ftp.freebsd.org
ftp.freebsd.org is an alias for ftp.geo.freebsd.org.
ftp.geo.freebsd.org has address 139.178.72.202
ftp.geo.freebsd.org has address 213.138.116.78
ftp.geo.freebsd.org has address 139.178.72.202
ftp.geo.freebsd.org has IPv6 address 2604:1380:2000:9501::15:0
ftp.geo.freebsd.org has IPv6 address 2001:41c8:112:8300::15:0
ftp.geo.freebsd.org has IPv6 address 2604:1380:2000:9501::15:0
ftp.geo.freebsd.org mail is handled by 0 .
host ftp.freebsd.org  0.01s user 0.00s system 88% cpu 0.007 total

Yep. Works.

Blacklist

I have written a simple and short unbound-blacklist-fetch.sh script to automate the process of generating an up-to-date config of blocked domains.

It uses one unbound(8) source and several hosts(5) sources, then combines them into the unbound(8) compatible format while removing duplicated entries.
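
The core transformation is quite simple. Below is a simplified sketch of the hosts(5) part only – it is not the actual script and the source URL is just a placeholder:

#! /bin/sh
# SIMPLIFIED SKETCH OF THE hosts(5) -> unbound(8) CONVERSION
# (the real unbound-blacklist-fetch.sh combines many sources and formats)
fetch -q -o - https://example.com/hosts-blacklist.txt \
  | awk '$1 ~ /^(0\.0\.0\.0|127\.0\.0\.1)$/ {print $2}' \
  | sort -u \
  | awk '{print "local-zone: \"" $1 "\" always_nxdomain"}' \
    > /var/unbound/conf.d/blacklist.conf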

unbound-blacklist-script.256

We will now fetch it, put it under the /root/bin directory (or use your favorite one), make it executable and run it.

# mkdir /root/bin

# fetch -o /root/bin/unbound-blacklist-fetch.sh \
> https://raw.githubusercontent.com/vermaden/scripts/master/unbound-blacklist-fetch.sh

# chmod +x /root/bin/unbound-blacklist-fetch.sh

# /root/bin/unbound-blacklist-fetch.sh

# ls -l /var/unbound/conf.d/blacklist.conf
-rw-r--r-- 1 root unbound 3003929 2020.11.16 00:23 /var/unbound/conf.d/blacklist.conf

# tail /var/unbound/conf.d/blacklist.conf
local-zone: "zyrtec.1.p2l.info" always_nxdomain
local-zone: "zyrtec.3.p2l.info" always_nxdomain
local-zone: "zyrtec.4.p2l.info" always_nxdomain
local-zone: "zyski-z-innowacji.pl" always_nxdomain
local-zone: "zytpirwai.net" always_nxdomain
local-zone: "zz.cqcounter.com" always_nxdomain
local-zone: "zzhc.vnet.cn" always_nxdomain
local-zone: "zzz.clickbank.net" always_nxdomain
local-zone: "zzz.onion.pet" always_nxdomain
local-zone: "zzzrtrcm2.com" always_nxdomain

The unbound(8) daemon already includes all /var/unbound/conf.d/*.conf files and we use that here.

You can change where the script generates the blocked domains config in the # SETTINGS section directly in the script.

% grep -A 5 SETTINGS scripts/unbound-blacklist-fetch.sh 
# SETTINGS
FILE=/var/unbound/conf.d/blacklist.conf
TEMP=/tmp/unbound
TYPE=always_nxdomain
ECHO=0
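
The TYPE variable maps directly to the unbound(8) local-zone type. If you would rather have the resolver answer with REFUSED instead of NXDOMAIN you could for example switch it to the always_refuse type (just an illustration, this article sticks to always_nxdomain):

TYPE=always_refuse

… which would produce entries like this one (hypothetical domain):

local-zone: "ads.example.com" always_refuse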

After the /var/unbound/conf.d/blacklist.conf file is generated you can now restart the unbound(8) service.
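
Before the restart you can also sanity-check the whole configuration (including the generated blacklist). I assume here the local-unbound-checkconf(8) wrapper from the FreeBSD base system:

# local-unbound-checkconf /var/unbound/unbound.conf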

# service local_unbound restart
Stopping local_unbound.
Waiting for PIDS: 87745.
Starting local_unbound.
Waiting for nameserver to start... good

We will also add that script to crontab(5) so it will fetch fresh information every day.

# cat << BSD >> /var/cron/tabs/root
> 
> # FETCH FRESH unbound(8) BLACKLIST
>   0 0 * * * /root/bin/unbound-blacklist-fetch.sh
> 
> BSD

# crontab -l | tail -4

# FETCH FRESH unbound(8) BLACKLIST
  0 0 * * * /root/bin/unbound-blacklist-fetch.sh

Test Blocked Domains

From the 60000+ blocked domains I have chosen ad.track.us.org as the target for verification.

% ping ad.track.us.org
ping: cannot resolve ad.track.us.org: Unknown host

% host ad.track.us.org
Host ad.track.us.org not found: 3(NXDOMAIN)

% dog ad.track.us.org
Status: NXDomain

% dog @1.1.1.1 ad.track.us.org
CNAME ad.track.us.org. 11m30s   "track.us.org."
    A track.us.org.     6m30s   185.59.208.177


unbound-test.256

As you can see the domain is successfully blocked.

The above blocking configuration does not mean that I will now disable the uBlock Origin plugin in Firefox, but it is a welcome addition to my workshop of tools for blocking unwanted content.

UPDATE 1 – Reworked Script and Alternatives

After reading the comments on Hacker News / Lobsters / Reddit I got a lot of good ideas on how to improve my script even more.

Some people suggested that very similar functionality already exists in the dns/void-zones-tools package on FreeBSD. One can also use the get_unbound_adblock.sh script or the lie-to-me solution.

There are also more sophisticated tools like Pi-hole which also includes a DHCP server and a web interface for management and statistics. Unfortunately Pi-hole does not run on FreeBSD.

After reworking my unbound-blacklist-fetch.sh script and adding additional sources it now blocks twice the amount of unwanted domains. In the first release about 60000 domains were blocked. Now it is more than 120000.

Here is the distribution of data between various types of sources.

% wc -lc /tmp/unbound/lists-*
   54587 1059592 /tmp/unbound/lists-domains
  143553 4115745 /tmp/unbound/lists-hosts
   32867 1596409 /tmp/unbound/lists-unbound
  231007 6771746 total

Here is the /var/unbound/conf.d/blacklist.conf file before these changes.

% wc -l blacklist.conf
   60009 blacklist.conf

% ls -l /var/unbound/conf.d/blacklist.conf
-rw-r--r-- 1 root unbound 2907535 2020-11-20 00:00 /var/unbound/conf.d/blacklist.conf

… and after adding additional sources.

% wc -l blacklist.conf
  122190 blacklist.conf

% ls -l /var/unbound/conf.d/blacklist.conf
-rw-r--r-- 1 root unbound 6086623 2020-11-20 15:07 /var/unbound/conf.d/blacklist.conf

Here is also a performance summary showing which part takes what amount of time.

Combining various sources and generating the final config takes about 5 seconds.

Most of the time is spent in fetching the data from various sources.

UPDATE1.unbound.script.256

The script is already uploaded to the GitHub repo.

Just fetch it and enjoy 🙂

UPDATE 2 – Huge Domains List Version

Thanks to Luca Castagnini from bsd.network, who pointed me to the https://oisd.nl/ site with a HUGE list of domains that can/could/should be blocked, I made another variant (or version) of the script, unbound-blacklist-fetch-huge.sh, with a total of 145 (!) various sources of domains to block.
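
The setup is the same as for the regular variant. Assuming the huge variant lives in the same scripts repository as the regular one, the following should do:

# fetch -o /root/bin/unbound-blacklist-fetch-huge.sh \
> https://raw.githubusercontent.com/vermaden/scripts/master/unbound-blacklist-fetch-huge.sh
# chmod +x /root/bin/unbound-blacklist-fetch-huge.sh
# /root/bin/unbound-blacklist-fetch-huge.sh
# service local_unbound restart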

It of course takes a little longer to fetch and generate than the 'casual' version.

UPDATE2.unbound.time

It takes a little less than 2 minutes to fetch and generate the new config, and the longest part is the fetching of those 145 sources. The generation itself takes about 15 seconds.

These 145 sources provide more than a million domains to block.

% wc -l /tmp/unbound/* 
 551704 lists-domains
 439505 lists-hosts
  60835 lists-unbound
1052044 total

After removing duplicated entries the script makes a little more than 480000 domains out of it.

% wc -l /var/unbound/conf.d/blacklist.conf 
 484829 /var/unbound/conf.d/blacklist.conf

Unfortunately it comes at a price. In this HUGE variant with domains from 145 sources the unbound(8) server now uses about 150 MB of RAM.

% top -b -o res|grep -E 'RES|unbound'
  PID USERNAME    THR PRI NICE   SIZE    RES STATE    C    TIME    WCPU COMMAND
75849 unbound       1  20    0   158M   149M select   4    0:03   0.00% local-unbound

I leave it up to you which version to use and which sources to choose for blocking, but as my Firefox with about 20 tabs open takes a little more than 4226 MB of RAM, these additional 150 MB used by unbound(8) do not hurt that much 🙂

% ./FIREFOX.RAM.sh
4226 MB

% cat FIREFOX.RAM.sh 
#! /bin/sh

SUM=0

top -b -o res \
  | sed 1,10d \
  | grep firefox \
  | awk '{print $7}' \
  | tr -cd '0-9\n' \
  | while read I
    do
      SUM=$(( ${SUM} + ${I} ))
      echo ${SUM}
    done | tail -1 | tr -d '\n'
echo " MB"

One more thing related to Firefox. After checking the 'free' memory with Firefox running and then after closing it, the difference was about 2.6 GB, which means that the above script for calculating Firefox memory usage is not very accurate 🙂
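
As a rough cross-check you can also sum the RSS column straight from the ps(1) output, with the same caveat about accuracy:

% ps -ax -o rss -o comm | awk '/firefox/ { SUM += $1 } END { printf "%d MB\n", SUM / 1024 }'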
EOF

FreeBSD Cluster with Pacemaker and Corosync

I always missed 'proper' cluster software for FreeBSD systems. Recently I got to run several Pacemaker/Corosync based clusters on Linux systems. I thought about how to make similar high availability solutions on FreeBSD and I was really shocked when I figured out that both the Pacemaker and Corosync tools are available in the FreeBSD Ports and packages as net/pacemaker2 and net/corosync2 respectively.

In this article I will check how well a Pacemaker and Corosync cluster works on FreeBSD.

pacemaker

There are many definitions of a cluster. The one that I like the most is that a cluster is a system that is still redundant after losing one of its nodes (it is still a cluster). This means that 3 nodes is the minimum for a cluster by that definition. Two node clusters are quite problematic because of their greater exposure to the split brain problem. That is why in two node clusters additional devices or systems are often added to make sure that split brain does not happen. For example one can add a third node without any resources or services, just in a 'witness' role. Another way is to add a shared disk resource that serves the same purpose, often a raw volume with the SCSI-3 Persistent Reservation mechanism.
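
Corosync's vote quorum service also has dedicated knobs for the two node case. A minimal sketch of such a quorum section (these are standard corosync_votequorum options, not used in the three node lab below) could look like this:

quorum {
  provider: corosync_votequorum
  two_node: 1
  wait_for_all: 1
}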

Lab Setup

As usual it will be entirely VirtualBox based and it will consist of 3 hosts. To avoid creating 3 identical FreeBSD installations I used the 12.1-RELEASE virtual machine image available directly from the FreeBSD Project.

There are several formats available – qcow2/raw/vhd/vmdk – but as I will be using VirtualBox I used the VMDK one.

Here is the list of the machines for the Pacemaker/Corosync cluster:

  • 10.0.10.111 node1
  • 10.0.10.112 node2
  • 10.0.10.113 node3

Each VirtualBox virtual machine for FreeBSD is the default one (as suggested by the VirtualBox wizard) with 512 MB RAM and a NAT Network, as shown in the image below.

machine

Here is the configuration of the NAT Network on VirtualBox.

nat-network-01

nat-network-02

Before we try to connect to our FreeBSD machines we need to make a minimal network configuration inside each VM. Each FreeBSD machine will have a minimal /etc/rc.conf file as shown in the example below for the node1 host.

root@node1:~ # cat /etc/rc.conf
hostname=node1
ifconfig_em0="inet 10.0.10.111/24 up"
defaultrouter=10.0.10.1
sshd_enable=YES

For the setup purposes we will need to allow root login on these FreeBSD machines with the PermitRootLogin yes option in the /etc/ssh/sshd_config file. You will also need to restart the sshd(8) service after the change.

root@node1:~ # grep PermitRootLogin /etc/ssh/sshd_config
PermitRootLogin yes

root@node1:~ # service sshd restart

By using NAT Network with Port Forwarding the FreeBSD machines will be accessible on localhost ports. For example the node1 machine will be available on port 2211, the node2 machine on port 2212 and so on. This is shown in the sockstat utility output below.

nat-network-03-sockstat

nat-network-04-ssh

To connect to such a machine from the VirtualBox host system you will need this command:

vboxhost % ssh -l root localhost -p 2211
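
If you do not want to remember the port numbers you can optionally add something like the following to ~/.ssh/config on the VirtualBox host. The ports follow the forwarding scheme described above (I assume here that node3 got port 2213):

Host node1
  HostName localhost
  Port 2211
  User root

Host node2
  HostName localhost
  Port 2212
  User root

Host node3
  HostName localhost
  Port 2213
  User root

With that in place a plain ssh node1 is enough.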

Packages

As we now have ssh(1) connectivity we need to add the needed packages. To make our VMs resolve DNS queries we need to add one last thing. We will also switch from the 'quarterly' to the 'latest' branch of the pkg(8) packages.

root@node1:~ # echo 'nameserver 1.1.1.1' > /etc/resolv.conf
root@node1:~ # sed -i '' s/quarterly/latest/g /etc/pkg/FreeBSD.conf

Remember to repeat these two commands above on the node2 and node3 systems.

Now we will add Pacemaker and Corosync packages.

root@node1:~ # pkg install pacemaker2 corosync2 crmsh

root@node2:~ # pkg install pacemaker2 corosync2 crmsh

root@node3:~ # pkg install pacemaker2 corosync2 crmsh

These are the messages from both pacemaker2 and corosync2 that we need to address.

Message from pacemaker2-2.0.4:

--
For correct operation, maximum socket buffer size must be tuned
by performing the following command as root :

# sysctl kern.ipc.maxsockbuf=18874368

To preserve this setting across reboots, append the following
to /etc/sysctl.conf :

kern.ipc.maxsockbuf=18874368

======================================================================

Message from corosync2-2.4.5_1:

--
For correct operation, maximum socket buffer size must be tuned
by performing the following command as root :

# sysctl kern.ipc.maxsockbuf=18874368

To preserve this setting across reboots, append the following
to /etc/sysctl.conf :

kern.ipc.maxsockbuf=18874368

We need to change the kern.ipc.maxsockbuf parameter. Let's do it then.

root@node1:~ # echo 'kern.ipc.maxsockbuf=18874368' >> /etc/sysctl.conf
root@node1:~ # service sysctl restart

root@node2:~ # echo 'kern.ipc.maxsockbuf=18874368' >> /etc/sysctl.conf
root@node2:~ # service sysctl restart

root@node3:~ # echo 'kern.ipc.maxsockbuf=18874368' >> /etc/sysctl.conf
root@node3:~ # service sysctl restart

Let's check what binaries come with these packages.

root@node1:~ # pkg info -l pacemaker2 | grep bin
        /usr/local/sbin/attrd_updater
        /usr/local/sbin/cibadmin
        /usr/local/sbin/crm_attribute
        /usr/local/sbin/crm_diff
        /usr/local/sbin/crm_error
        /usr/local/sbin/crm_failcount
        /usr/local/sbin/crm_master
        /usr/local/sbin/crm_mon
        /usr/local/sbin/crm_node
        /usr/local/sbin/crm_report
        /usr/local/sbin/crm_resource
        /usr/local/sbin/crm_rule
        /usr/local/sbin/crm_shadow
        /usr/local/sbin/crm_simulate
        /usr/local/sbin/crm_standby
        /usr/local/sbin/crm_ticket
        /usr/local/sbin/crm_verify
        /usr/local/sbin/crmadmin
        /usr/local/sbin/fence_legacy
        /usr/local/sbin/iso8601
        /usr/local/sbin/pacemaker-remoted
        /usr/local/sbin/pacemaker_remoted
        /usr/local/sbin/pacemakerd
        /usr/local/sbin/stonith_admin

root@node1:~ # pkg info -l corosync2 | grep bin
        /usr/local/bin/corosync-blackbox
        /usr/local/sbin/corosync
        /usr/local/sbin/corosync-cfgtool
        /usr/local/sbin/corosync-cmapctl
        /usr/local/sbin/corosync-cpgtool
        /usr/local/sbin/corosync-keygen
        /usr/local/sbin/corosync-notifyd
        /usr/local/sbin/corosync-quorumtool

root@node1:~ # pkg info -l crmsh | grep bin
        /usr/local/bin/crm

Cluster Initialization

Now we will initialize our FreeBSD cluster.

First we need to make sure that the names of the nodes are resolvable.

root@node1:~ # tail -3 /etc/hosts

10.0.10.111 node1
10.0.10.112 node2
10.0.10.113 node3

root@node2:~ # tail -3 /etc/hosts

10.0.10.111 node1
10.0.10.112 node2
10.0.10.113 node3

root@node3:~ # tail -3 /etc/hosts

10.0.10.111 node1
10.0.10.112 node2
10.0.10.113 node3


Now we will generate the Corosync key.

root@node1:~ # corosync-keygen
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/random.
Press keys on your keyboard to generate entropy.
Writing corosync key to /usr/local/etc/corosync/authkey.

root@node1:~ # echo $?
0

root@node1:~ # ls -l /usr/local/etc/corosync/authkey
-r--------  1 root  wheel  128 Sep  2 20:37 /usr/local/etc/corosync/authkey

Now for the Corosync configuration file. Some examples are provided by the package maintainer.

root@node1:~ # pkg info -l corosync2 | grep example
        /usr/local/etc/corosync/corosync.conf.example
        /usr/local/etc/corosync/corosync.conf.example.udpu

We will take the second one as a base for our config.

root@node1:~ # cp /usr/local/etc/corosync/corosync.conf.example.udpu /usr/local/etc/corosync/corosync.conf

root@node1:~ # vi /usr/local/etc/corosync/corosync.conf
               /* LOTS OF EDITS HERE */

root@node1:~ # cat /usr/local/etc/corosync/corosync.conf

totem {
  version: 2
  crypto_cipher: aes256
  crypto_hash: sha256
  transport: udpu

  interface {
    ringnumber: 0
    bindnetaddr: 10.0.10.0
    mcastport: 5405
    ttl: 1
  }
}

logging {
  fileline: off
  to_logfile: yes
  to_syslog: no
  logfile: /var/log/cluster/corosync.log
  debug: off
  timestamp: on

  logger_subsys {
    subsys: QUORUM
    debug: off
  }
}

nodelist {

  node {
    ring0_addr: 10.0.10.111
    nodeid: 1
  }

  node {
    ring0_addr: 10.0.10.112
    nodeid: 2
  }

  node {
    ring0_addr: 10.0.10.113
    nodeid: 3
  }

}

quorum {
  provider: corosync_votequorum
  expected_votes: 2
}

Now we need to propagate both the Corosync key and the config across the nodes of the cluster.

We could use one of the simple tools created exactly for that, like the net/csync2 cluster synchronization tool, but plain old net/rsync will serve just as well.

root@node1:~ # pkg install -y rsync

root@node1:~ # rsync -av /usr/local/etc/corosync/ node2:/usr/local/etc/corosync/
The authenticity of host 'node2 (10.0.10.112)' can't be established.
ECDSA key fingerprint is SHA256:/ZDmln7GKi6n0kbad73TIrajPjGfQqJJX+ReSf3NMvc.
No matching host key fingerprint found in DNS.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node2' (ECDSA) to the list of known hosts.
Password for root@node2:
sending incremental file list
./
authkey
corosync.conf
service.d/
uidgid.d/

sent 1,100 bytes  received 69 bytes  259.78 bytes/sec
total size is 4,398  speedup is 3.76

root@node1:~ # rsync -av /usr/local/etc/corosync/ node3:/usr/local/etc/corosync/
The authenticity of host 'node2 (10.0.10.112)' can't be established.
ECDSA key fingerprint is SHA256:/ZDmln7GKi6n0kbad73TIrajPjGfQqJJX+ReSf3NMvc.
No matching host key fingerprint found in DNS.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node3' (ECDSA) to the list of known hosts.
Password for root@node3:
sending incremental file list
./
authkey
corosync.conf
service.d/
uidgid.d/

sent 1,100 bytes  received 69 bytes  259.78 bytes/sec
total size is 4,398  speedup is 3.76

Now let's check that they are the same.

root@node1:~ # cksum /usr/local/etc/corosync/{authkey,corosync.conf}
2277171666 128 /usr/local/etc/corosync/authkey
1728717329 622 /usr/local/etc/corosync/corosync.conf

root@node2:~ # cksum /usr/local/etc/corosync/{authkey,corosync.conf}
2277171666 128 /usr/local/etc/corosync/authkey
1728717329 622 /usr/local/etc/corosync/corosync.conf

root@node3:~ # cksum /usr/local/etc/corosync/{authkey,corosync.conf}
2277171666 128 /usr/local/etc/corosync/authkey
1728717329 622 /usr/local/etc/corosync/corosync.conf

Same.

We can now add corosync_enable=YES and pacemaker_enable=YES to the /etc/rc.conf file.

root@node1:~ # sysrc corosync_enable=YES
corosync_enable:  -> YES

root@node1:~ # sysrc pacemaker_enable=YES
pacemaker_enable:  -> YES

root@node2:~ # sysrc corosync_enable=YES
corosync_enable:  -> YES

root@node2:~ # sysrc pacemaker_enable=YES
pacemaker_enable:  -> YES

root@node3:~ # sysrc corosync_enable=YES
corosync_enable:  -> YES

root@node3:~ # sysrc pacemaker_enable=YES
pacemaker_enable:  -> YES

Let's start these services then.

root@node1:~ # service corosync start
Starting corosync.
Sep 02 20:55:35 notice  [MAIN  ] Corosync Cluster Engine ('2.4.5'): started and ready to provide service.
Sep 02 20:55:35 info    [MAIN  ] Corosync built-in features:
Sep 02 20:55:35 warning [MAIN  ] interface section bindnetaddr is used together with nodelist. Nodelist one is going to be used.
Sep 02 20:55:35 warning [MAIN  ] Please migrate config file to nodelist.

root@node1:~ # ps aux | grep corosync
root  1695   0.0  7.9 38340 38516  -  S    20:55    0:00.40 /usr/local/sbin/corosync
root  1699   0.0  0.1   524   336  0  R+   20:57    0:00.00 grep corosync

Do the same on the node2 and node3 systems.

Pacemaker is not yet running so the crm status command will fail.

root@node1:~ # crm status
Could not connect to the CIB: Socket is not connected
crm_mon: Error: cluster is not available on this node
ERROR: status: crm_mon (rc=102): 

We will start it now.

root@node1:~ # service pacemaker start
Starting pacemaker.

root@node2:~ # service pacemaker start
Starting pacemaker.

root@node3:~ # service pacemaker start
Starting pacemaker.

You need to give it a little time to start because if you execute the crm status command right away you will get a 0 nodes configured message as shown below.

root@node1:~ # crm status
Cluster Summary:
  * Stack: unknown
  * Current DC: NONE
  * Last updated: Wed Sep  2 20:58:51 2020
  * Last change:  
  * 0 nodes configured
  * 0 resource instances configured


Full List of Resources:
  * No resources

… but after a while everything is detected and works as desired.

root@node1:~ # crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node2 (version 2.0.4-2deceaa3ae) - partition with quorum
  * Last updated: Wed Sep  2 21:02:49 2020
  * Last change:  Wed Sep  2 20:59:00 2020 by hacluster via crmd on node2
  * 3 nodes configured
  * 0 resource instances configured

Node List:
  * Online: [ node1 node2 node3 ]

Full List of Resources:
  * No resources

Pacemaker runs properly.

root@node1:~ # ps aux | grep pacemaker
root      1716   0.0  0.5 10844   2396  -  Is   20:58     0:00.00 daemon: /usr/local/sbin/pacemakerd[1717] (daemon)
root      1717   0.0  5.2 49264  25284  -  S    20:58     0:00.27 /usr/local/sbin/pacemakerd
hacluster 1718   0.0  6.1 48736  29708  -  Ss   20:58     0:00.75 /usr/local/libexec/pacemaker/pacemaker-based
root      1719   0.0  4.5 40628  21984  -  Ss   20:58     0:00.28 /usr/local/libexec/pacemaker/pacemaker-fenced
root      1720   0.0  2.8 25204  13688  -  Ss   20:58     0:00.20 /usr/local/libexec/pacemaker/pacemaker-execd
hacluster 1721   0.0  3.9 38148  19100  -  Ss   20:58     0:00.25 /usr/local/libexec/pacemaker/pacemaker-attrd
hacluster 1722   0.0  2.9 25460  13864  -  Ss   20:58     0:00.17 /usr/local/libexec/pacemaker/pacemaker-schedulerd
hacluster 1723   0.0  5.4 49304  26300  -  Ss   20:58     0:00.41 /usr/local/libexec/pacemaker/pacemaker-controld
root      1889   0.0  0.6 11348   2728  0  S+   21:56     0:00.00 grep pacemaker

We can check how Corosync sees its members.

root@node1:~ # corosync-cmapctl | grep members
runtime.totem.pg.mrp.srp.members.1.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.1.ip (str) = r(0) ip(10.0.10.111) 
runtime.totem.pg.mrp.srp.members.1.join_count (u32) = 1
runtime.totem.pg.mrp.srp.members.1.status (str) = joined
runtime.totem.pg.mrp.srp.members.2.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.2.ip (str) = r(0) ip(10.0.10.112) 
runtime.totem.pg.mrp.srp.members.2.join_count (u32) = 1
runtime.totem.pg.mrp.srp.members.2.status (str) = joined
runtime.totem.pg.mrp.srp.members.3.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.3.ip (str) = r(0) ip(10.0.10.113) 
runtime.totem.pg.mrp.srp.members.3.join_count (u32) = 1
runtime.totem.pg.mrp.srp.members.3.status (str) = joined

… or the quorum information.

root@node1:~ # corosync-quorumtool
Quorum information
------------------
Date:             Wed Sep  2 21:00:38 2020
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          1
Ring ID:          1/12
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2  
Flags:            Quorate 

Membership information
----------------------
    Nodeid      Votes Name
         1          1 10.0.10.111 (local)
         2          1 10.0.10.112
         3          1 10.0.10.113

The Corosync log file is filled with the following information.

root@node1:~ # cat /var/log/cluster/corosync.log
Sep 02 20:55:35 [1694] node1 corosync notice  [MAIN  ] Corosync Cluster Engine ('2.4.5'): started and ready to provide service.
Sep 02 20:55:35 [1694] node1 corosync info    [MAIN  ] Corosync built-in features:
Sep 02 20:55:35 [1694] node1 corosync warning [MAIN  ] interface section bindnetaddr is used together with nodelist. Nodelist one is going to be used.
Sep 02 20:55:35 [1694] node1 corosync warning [MAIN  ] Please migrate config file to nodelist.
Sep 02 20:55:35 [1694] node1 corosync notice  [TOTEM ] Initializing transport (UDP/IP Unicast).
Sep 02 20:55:35 [1694] node1 corosync notice  [TOTEM ] Initializing transmit/receive security (NSS) crypto: aes256 hash: sha256
Sep 02 20:55:35 [1694] node1 corosync notice  [TOTEM ] The network interface [10.0.10.111] is now up.
Sep 02 20:55:35 [1694] node1 corosync notice  [SERV  ] Service engine loaded: corosync configuration map access [0]
Sep 02 20:55:35 [1694] node1 corosync info    [QB    ] server name: cmap
Sep 02 20:55:35 [1694] node1 corosync notice  [SERV  ] Service engine loaded: corosync configuration service [1]
Sep 02 20:55:35 [1694] node1 corosync info    [QB    ] server name: cfg
Sep 02 20:55:35 [1694] node1 corosync notice  [SERV  ] Service engine loaded: corosync cluster closed process group service v1.01 [2]
Sep 02 20:55:35 [1694] node1 corosync info    [QB    ] server name: cpg
Sep 02 20:55:35 [1694] node1 corosync notice  [SERV  ] Service engine loaded: corosync profile loading service [4]
Sep 02 20:55:35 [1694] node1 corosync notice  [QUORUM] Using quorum provider corosync_votequorum
Sep 02 20:55:35 [1694] node1 corosync notice  [SERV  ] Service engine loaded: corosync vote quorum service v1.0 [5]
Sep 02 20:55:35 [1694] node1 corosync info    [QB    ] server name: votequorum
Sep 02 20:55:35 [1694] node1 corosync notice  [SERV  ] Service engine loaded: corosync cluster quorum service v0.1 [3]
Sep 02 20:55:35 [1694] node1 corosync info    [QB    ] server name: quorum
Sep 02 20:55:35 [1694] node1 corosync notice  [TOTEM ] adding new UDPU member {10.0.10.111}
Sep 02 20:55:35 [1694] node1 corosync notice  [TOTEM ] adding new UDPU member {10.0.10.112}
Sep 02 20:55:35 [1694] node1 corosync notice  [TOTEM ] adding new UDPU member {10.0.10.113}
Sep 02 20:55:35 [1694] node1 corosync notice  [TOTEM ] A new membership (10.0.10.111:4) was formed. Members joined: 1
Sep 02 20:55:35 [1694] node1 corosync warning [CPG   ] downlist left_list: 0 received
Sep 02 20:55:35 [1694] node1 corosync notice  [QUORUM] Members[1]: 1
Sep 02 20:55:35 [1694] node1 corosync notice  [MAIN  ] Completed service synchronization, ready to provide service.
Sep 02 20:58:14 [1694] node1 corosync notice  [TOTEM ] A new membership (10.0.10.111:8) was formed. Members joined: 2
Sep 02 20:58:14 [1694] node1 corosync warning [CPG   ] downlist left_list: 0 received
Sep 02 20:58:14 [1694] node1 corosync warning [CPG   ] downlist left_list: 0 received
Sep 02 20:58:14 [1694] node1 corosync notice  [QUORUM] This node is within the primary component and will provide service.
Sep 02 20:58:14 [1694] node1 corosync notice  [QUORUM] Members[2]: 1 2
Sep 02 20:58:14 [1694] node1 corosync notice  [MAIN  ] Completed service synchronization, ready to provide service.
Sep 02 20:58:19 [1694] node1 corosync notice  [TOTEM ] A new membership (10.0.10.111:12) was formed. Members joined: 3
Sep 02 20:58:19 [1694] node1 corosync warning [CPG   ] downlist left_list: 0 received
Sep 02 20:58:19 [1694] node1 corosync warning [CPG   ] downlist left_list: 0 received
Sep 02 20:58:19 [1694] node1 corosync warning [CPG   ] downlist left_list: 0 received
Sep 02 20:58:19 [1694] node1 corosync notice  [QUORUM] Members[3]: 1 2 3
Sep 02 20:58:19 [1694] node1 corosync notice  [MAIN  ] Completed service synchronization, ready to provide service.

Here is the configuration.

root@node1:~ # crm configure show
node 1: node1
node 2: node2
node 3: node3
property cib-bootstrap-options: \
        have-watchdog=false \
        dc-version=2.0.4-2deceaa3ae \
        cluster-infrastructure=corosync

As we will not be configuring the STONITH mechanism we will disable it.

root@node1:~ # crm configure property stonith-enabled=false

New configuration with STONITH disabled.

root@node1:~ # crm configure show
node 1: node1
node 2: node2
node 3: node3
property cib-bootstrap-options: \
        have-watchdog=false \
        dc-version=2.0.4-2deceaa3ae \
        cluster-infrastructure=corosync \
        stonith-enabled=false

The STONITH configuration is out of the scope of this article, but a properly configured STONITH setup looks like this.

stonith

First Service

We will now configure our first highly available service – a classic – a floating IP address 🙂

root@node1:~ # crm configure primitive IP ocf:heartbeat:IPaddr2 params ip=10.0.10.200 cidr_netmask="24" op monitor interval="30s"

Let's check how it behaves.

root@node1:~ # crm configure show
node 1: node1
node 2: node2
node 3: node3
primitive IP IPaddr2 \
        params ip=10.0.10.200 cidr_netmask=24 \
        op monitor interval=30s
property cib-bootstrap-options: \
        have-watchdog=false \
        dc-version=2.0.4-2deceaa3ae \
        cluster-infrastructure=corosync \
        stonith-enabled=false

Looks good – let's check the cluster status.

root@node1:~ # crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node2 (version 2.0.4-2deceaa3ae) - partition with quorum
  * Last updated: Wed Sep  2 22:03:35 2020
  * Last change:  Wed Sep  2 22:02:53 2020 by root via cibadmin on node1
  * 3 nodes configured
  * 1 resource instance configured

Node List:
  * Online: [ node1 node2 node3 ]

Full List of Resources:
  * IP  (ocf::heartbeat:IPaddr2):        Stopped

Failed Resource Actions:
  * IP_monitor_0 on node3 'not installed' (5): call=5, status='complete', exitreason='Setup problem: couldn't find command: ip', last-rc-change='2020-09-02 22:02:53Z', queued=0ms, exec=132ms
  * IP_monitor_0 on node2 'not installed' (5): call=5, status='complete', exitreason='Setup problem: couldn't find command: ip', last-rc-change='2020-09-02 22:02:54Z', queued=0ms, exec=120ms
  * IP_monitor_0 on node1 'not installed' (5): call=5, status='complete', exitreason='Setup problem: couldn't find command: ip', last-rc-change='2020-09-02 22:02:53Z', queued=0ms, exec=110ms

Crap. A Linuxism. The ip(8) command is expected to be present in the system. This is FreeBSD, and like any other UNIX system it comes with the ifconfig(8) command instead.
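
For reference, the operation our resource needs to perform maps roughly like this (eth0/em0 are just example interface names):

# Linux - roughly what ocf:heartbeat:IPaddr2 does under the hood:
ip addr add 10.0.10.200/24 dev eth0

# FreeBSD equivalent:
ifconfig em0 inet 10.0.10.200/24 alias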

We will have to figure out something else. For now we will delete our useless IP service.

root@node1:~ # crm configure delete IP

Status after deletion.

root@node1:~ # crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node2 (version 2.0.4-2deceaa3ae) - partition with quorum
  * Last updated: Wed Sep  2 22:04:34 2020
  * Last change:  Wed Sep  2 22:04:31 2020 by root via cibadmin on node1
  * 3 nodes configured
  * 0 resource instances configured

Node List:
  * Online: [ node1 node2 node3 ]

Full List of Resources:
  * No resources

Custom Resource

Let's check what resources are available in the stock Pacemaker installation.

root@node1:~ # ls -l /usr/local/lib/ocf/resource.d/pacemaker
total 144
-r-xr-xr-x  1 root  wheel   7484 Aug 29 01:22 ClusterMon
-r-xr-xr-x  1 root  wheel   9432 Aug 29 01:22 Dummy
-r-xr-xr-x  1 root  wheel   5256 Aug 29 01:22 HealthCPU
-r-xr-xr-x  1 root  wheel   5342 Aug 29 01:22 HealthIOWait
-r-xr-xr-x  1 root  wheel   9450 Aug 29 01:22 HealthSMART
-r-xr-xr-x  1 root  wheel   6186 Aug 29 01:22 Stateful
-r-xr-xr-x  1 root  wheel  11370 Aug 29 01:22 SysInfo
-r-xr-xr-x  1 root  wheel   5856 Aug 29 01:22 SystemHealth
-r-xr-xr-x  1 root  wheel   7382 Aug 29 01:22 attribute
-r-xr-xr-x  1 root  wheel   7854 Aug 29 01:22 controld
-r-xr-xr-x  1 root  wheel  16134 Aug 29 01:22 ifspeed
-r-xr-xr-x  1 root  wheel  11040 Aug 29 01:22 o2cb
-r-xr-xr-x  1 root  wheel  11696 Aug 29 01:22 ping
-r-xr-xr-x  1 root  wheel   6356 Aug 29 01:22 pingd
-r-xr-xr-x  1 root  wheel   3702 Aug 29 01:22 remote

Not many … so we will try to modify the Dummy resource into an IP changer for FreeBSD.

root@node1:~ # cp /usr/local/lib/ocf/resource.d/pacemaker/Dummy /usr/local/lib/ocf/resource.d/pacemaker/ifconfig

root@node1:~ # vi /usr/local/lib/ocf/resource.d/pacemaker/ifconfig
               /* LOTS OF TYPING */

Because of the WordPress blogging system limitations I am forced to post this ifconfig resource as an image … but fear not – the text version is also available here – ifconfig.odt – for download.

Also, the first version did not go that well …

root@node1:~ # setenv OCF_ROOT /usr/local/lib/ocf
root@node1:~ # ocf-tester -n resourcename /usr/local/lib/ocf/resource.d/pacemaker/ifconfig
Beginning tests for /usr/local/lib/ocf/resource.d/pacemaker/ifconfig...
* rc=3: Your agent has too restrictive permissions: should be 755
-:1: parser error : Start tag expected, '<' not found
usage: /usr/local/lib/ocf/resource.d/pacemaker/ifconfig {start|stop|monitor}
^
* rc=1: Your agent produces meta-data which does not conform to ra-api-1.dtd
* rc=3: Your agent does not support the meta-data action
* rc=3: Your agent does not support the validate-all action
* rc=0: Monitoring a stopped resource should return 7
* rc=0: The initial probe for a stopped resource should return 7 or 5 even if all binaries are missing
* Your agent does not support the notify action (optional)
* Your agent does not support the demote action (optional)
* Your agent does not support the promote action (optional)
* Your agent does not support master/slave (optional)
* rc=0: Monitoring a stopped resource should return 7
* rc=0: Monitoring a stopped resource should return 7
* rc=0: Monitoring a stopped resource should return 7
* Your agent does not support the reload action (optional)
Tests failed: /usr/local/lib/ocf/resource.d/pacemaker/ifconfig failed 9 tests

But after adding the 755 mode to it and making several (hundred) changes it became usable.

root@node1:~ # vi /usr/local/lib/ocf/resource.d/pacemaker/ifconfig
             /* LOTS OF NERVOUS TYPING */
root@node1:~ # chmod 755 /usr/local/lib/ocf/resource.d/pacemaker/ifconfig
root@node1:~ # setenv OCF_ROOT /usr/local/lib/ocf
root@node1:~ # ocf-tester -n resourcename /usr/local/lib/ocf/resource.d/pacemaker/ifconfig
Beginning tests for /usr/local/lib/ocf/resource.d/pacemaker/ifconfig...
* Your agent does not support the notify action (optional)
* Your agent does not support the demote action (optional)
* Your agent does not support the promote action (optional)
* Your agent does not support master/slave (optional)
* Your agent does not support the reload action (optional)
/usr/local/lib/ocf/resource.d/pacemaker/ifconfig passed all tests

Looks usable.

The ifconfig resource is pretty limited and has a hardcoded IP address for now.

ifconfig
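
Since the full resource only made it into the post as an image, below is a rough sketch of what such a minimal ifconfig(8) based OCF agent can look like. This is not the exact resource from ifconfig.odt, just an illustration with the address and interface hardcoded like in the original:

#! /bin/sh
# ROUGH SKETCH OF A MINIMAL ifconfig(8) BASED OCF AGENT FOR FreeBSD
# (illustration only - not the exact resource used in this article)

ADDR="10.0.10.200/24"   # floating IP with prefix
NIC="em0"               # interface to put the alias on

meta_data() {
  cat << __XML
<?xml version="1.0"?>
<!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd">
<resource-agent name="ifconfig" version="0.1">
  <version>1.0</version>
  <shortdesc lang="en">Floating IP alias via ifconfig(8)</shortdesc>
  <longdesc lang="en">Adds/removes an IP alias with ifconfig(8) on FreeBSD.</longdesc>
  <parameters/>
  <actions>
    <action name="start"        timeout="20s" />
    <action name="stop"         timeout="20s" />
    <action name="monitor"      timeout="20s" interval="30s" />
    <action name="meta-data"    timeout="5s" />
    <action name="validate-all" timeout="5s" />
  </actions>
</resource-agent>
__XML
}

ip_present() {
  # the alias is up when the bare address shows up in ifconfig(8) output
  ifconfig ${NIC} | grep -q "inet ${ADDR%/*} "
}

case ${1} in
  (meta-data)    meta_data ;;
  (validate-all) exit 0 ;;
  (start)        ip_present || ifconfig ${NIC} inet ${ADDR} alias || exit 1 ;;
  (stop)         ! ip_present || ifconfig ${NIC} inet ${ADDR%/*} -alias || exit 1 ;;
  (monitor)      ip_present || exit 7 ;;   # 7 == OCF_NOT_RUNNING
  (*)            echo "usage: ${0} {start|stop|monitor|meta-data|validate-all}"; exit 3 ;;
esac

exit 0

The monitor action returning 7 (OCF_NOT_RUNNING) for a stopped resource is exactly what ocf-tester complained about in the first failed run above.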

Let's try to add the new IP resource to our FreeBSD cluster.

Tests

root@node1:~ # crm configure primitive IP ocf:pacemaker:ifconfig op monitor interval="30"

Added.

Let's see what the crm status command shows now.

root@node1:~ # crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node2 (version 2.0.4-2deceaa3ae) - partition with quorum
  * Last updated: Wed Sep  2 22:44:52 2020
  * Last change:  Wed Sep  2 22:44:44 2020 by root via cibadmin on node1
  * 3 nodes configured
  * 1 resource instance configured

Node List:
  * Online: [ node1 node2 node3 ]

Full List of Resources:
  * IP  (ocf::pacemaker:ifconfig):       Started node1

Failed Resource Actions:
  * IP_monitor_0 on node3 'not installed' (5): call=24, status='Not installed', exitreason='', last-rc-change='2020-09-02 22:42:52Z', queued=0ms, exec=5ms
  * IP_monitor_0 on node2 'not installed' (5): call=24, status='Not installed', exitreason='', last-rc-change='2020-09-02 22:42:53Z', queued=0ms, exec=2ms

Crap. I forgot to copy this new ifconfig resource to the other nodes. Let's fix that now.

root@node1:~ # rsync -av /usr/local/lib/ocf/resource.d/pacemaker/ node2:/usr/local/lib/ocf/resource.d/pacemaker/
Password for root@node2:
sending incremental file list
./
ifconfig

sent 3,798 bytes  received 38 bytes  1,534.40 bytes/sec
total size is 128,003  speedup is 33.37

root@node1:~ # rsync -av /usr/local/lib/ocf/resource.d/pacemaker/ node3:/usr/local/lib/ocf/resource.d/pacemaker/
Password for root@node3:
sending incremental file list
./
ifconfig

sent 3,798 bytes  received 38 bytes  1,534.40 bytes/sec
total size is 128,003  speedup is 33.37

Let's stop, delete and re-add our precious resource now.

root@node1:~ # crm resource stop IP
root@node1:~ # crm configure delete IP
root@node1:~ # crm configure primitive IP ocf:pacemaker:ifconfig op monitor interval="30"

Fingers crossed.

root@node1:~ # crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node2 (version 2.0.4-2deceaa3ae) - partition with quorum
  * Last updated: Wed Sep  2 22:45:46 2020
  * Last change:  Wed Sep  2 22:45:43 2020 by root via cibadmin on node1
  * 3 nodes configured
  * 1 resource instance configured

Node List:
  * Online: [ node1 node2 node3 ]

Full List of Resources:
  * IP  (ocf::pacemaker:ifconfig):       Started node1

Looks like it is running properly.

Let's verify that it is really up where it should be.

root@node1:~ # ifconfig em0
em0: flags=8843 metric 0 mtu 1500
        options=81009b
        ether 08:00:27:2a:78:60
        inet 10.0.10.111 netmask 0xffffff00 broadcast 10.0.10.255
        inet 10.0.10.200 netmask 0xffffff00 broadcast 10.0.10.255
        media: Ethernet autoselect (1000baseT )
        status: active
        nd6 options=29

root@node2:~ # ifconfig em0
em0: flags=8843 metric 0 mtu 1500
        options=81009b
        ether 08:00:27:80:50:05
        inet 10.0.10.112 netmask 0xffffff00 broadcast 10.0.10.255
        media: Ethernet autoselect (1000baseT )
        status: active
        nd6 options=29

root@node3:~ # ifconfig em0
em0: flags=8843 metric 0 mtu 1500
        options=81009b
        ether 08:00:27:74:5e:b9
        inet 10.0.10.113 netmask 0xffffff00 broadcast 10.0.10.255
        media: Ethernet autoselect (1000baseT )
        status: active
        nd6 options=29

Seems to be working.

Now let's try to move it to another node in the cluster.

root@node1:~ # crm resource move IP node3
INFO: Move constraint created for IP to node3

root@node1:~ # crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node2 (version 2.0.4-2deceaa3ae) - partition with quorum
  * Last updated: Wed Sep  2 22:47:31 2020
  * Last change:  Wed Sep  2 22:47:28 2020 by root via crm_resource on node1
  * 3 nodes configured
  * 1 resource instance configured

Node List:
  * Online: [ node1 node2 node3 ]

Full List of Resources:
  * IP  (ocf::pacemaker:ifconfig):       Started node3

It switched properly to the node3 system.

root@node3:~ # ifconfig em0
em0: flags=8843 metric 0 mtu 1500
        options=81009b
        ether 08:00:27:74:5e:b9
        inet 10.0.10.113 netmask 0xffffff00 broadcast 10.0.10.255
        inet 10.0.10.200 netmask 0xffffff00 broadcast 10.0.10.255
        media: Ethernet autoselect (1000baseT )
        status: active
        nd6 options=29

root@node1:~ # ifconfig em0
em0: flags=8843 metric 0 mtu 1500
        options=81009b
        ether 08:00:27:2a:78:60
        inet 10.0.10.111 netmask 0xffffff00 broadcast 10.0.10.255
        media: Ethernet autoselect (1000baseT )
        status: active
        nd6 options=29

Now we will power off the node3 system to check if that IP is really highly available.

root@node2:~ # crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node2 (version 2.0.4-2deceaa3ae) - partition with quorum
  * Last updated: Wed Sep  2 22:49:57 2020
  * Last change:  Wed Sep  2 22:47:29 2020 by root via crm_resource on node1
  * 3 nodes configured
  * 1 resource instance configured

Node List:
  * Online: [ node1 node2 node3 ]

Full List of Resources:
  * IP  (ocf::pacemaker:ifconfig):       Started node3

root@node3:~ # poweroff

root@node2:~ # crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node2 (version 2.0.4-2deceaa3ae) - partition with quorum
  * Last updated: Wed Sep  2 22:50:16 2020
  * Last change:  Wed Sep  2 22:47:29 2020 by root via crm_resource on node1
  * 3 nodes configured
  * 1 resource instance configured

Node List:
  * Online: [ node1 node2 ]
  * OFFLINE: [ node3 ]

Full List of Resources:
  * IP  (ocf::pacemaker:ifconfig):       Started node1

Seems that the failover went well.

The crm command also colors various sections of its output.

failover

Good to know that a Pacemaker and Corosync cluster runs well on FreeBSD.

Some work is needed to write the required resource agents, but with some time and determination one can surely turn FreeBSD into a very capable highly available cluster.

EOF

Run broot on FreeBSD

The broot file manager is a quite fresh and nice approach to filtering/searching/viewing/manipulating files and directories … and whatever else you call messing with files 🙂

The broot tool is not yet available on FreeBSD systems (as a package or port).

This guide will show you how to build and install it on your FreeBSD system.

Here is how it looks in action.

Filter for jails.

broot-filter-jails.jpg

Filter for zfs.

broot-filter-zfs.jpg

It has a 'size mode' similar to the ncdu(1) tool when started with the -s option.

broot-filter-size.jpg

You can also check the Feature Showcase section on the project's GitHub page – https://github.com/Canop/broot.

Build

There are three steps to make it happen.

1. You need to install the rust package.

# pkg install rust

2. Then you need to type (as a regular user) the cargo install broot command.

% cargo install broot

It will fail here:

broot-fail.jpg

3. You will need to apply this patch below:

% diff -u \
  /home/vermaden/.cargo/registry/src/github.com-1ecc6299db9ec823/crossterm-0.14.1/src/terminal/sys/unix.rs.ORG \
  /home/vermaden/.cargo/registry/src/github.com-1ecc6299db9ec823/crossterm-0.14.1/src/terminal/sys/unix.rs
--- /home/vermaden/.cargo/registry/src/github.com-1ecc6299db9ec823/crossterm-0.14.1/src/terminal/sys/unix.rs.ORG  2020-01-10 23:41:29.825912000 +0100
+++ /home/vermaden/.cargo/registry/src/github.com-1ecc6299db9ec823/crossterm-0.14.1/src/terminal/sys/unix.rs      2020-01-10 23:41:07.703471000 +0100
@@ -33,7 +33,7 @@
         ws_ypixel: 0,
     };
 
-    if let Ok(true) = wrap_with_result(unsafe { ioctl(STDOUT_FILENO, TIOCGWINSZ, &mut size) }) {
+    if let Ok(true) = wrap_with_result(unsafe { ioctl(STDOUT_FILENO, TIOCGWINSZ.into(), &mut size) }) {
         Ok((size.ws_col, size.ws_row))
     } else {
         tput_size().ok_or_else(|| std::io::Error::last_os_error().into())

Then type the cargo install broot command again. It will now compile properly.

% cargo install broot
    Updating crates.io index
  Downloaded broot v0.11.6
  Downloaded 1 crate (1.6 MB) in 2.89s
  Installing broot v0.11.6
   Compiling libc v0.2.66
   Compiling cfg-if v0.1.10
   Compiling lazy_static v1.4.0
   Compiling autocfg v0.1.7
   Compiling semver-parser v0.7.0
   Compiling autocfg v1.0.0
   Compiling proc-macro2 v1.0.7
   Compiling log v0.4.8
   Compiling scopeguard v1.0.0
   Compiling unicode-xid v0.2.0
   Compiling bitflags v1.2.1
   Compiling syn v1.0.13
   Compiling memchr v2.2.1
   Compiling arc-swap v0.4.4
   Compiling slab v0.4.2
   Compiling smallvec v1.1.0
   Compiling serde v1.0.104
   Compiling unicode-width v0.1.7
   Compiling regex-syntax v0.6.13
   Compiling ansi_term v0.11.0
   Compiling strsim v0.8.0
   Compiling vec_map v0.8.1
   Compiling id-arena v2.2.1
   Compiling custom_error v1.7.1
   Compiling glob v0.3.0
   Compiling open v1.3.2
   Compiling umask v0.1.8
   Compiling thread_local v1.0.0
   Compiling minimad v0.6.3
   Compiling lazy-regex v0.1.2
   Compiling semver v0.9.0
   Compiling lock_api v0.3.3
   Compiling crossbeam-utils v0.7.0
   Compiling crossbeam-epoch v0.8.0
   Compiling num-traits v0.2.11
   Compiling num-integer v0.1.42
   Compiling textwrap v0.11.0
   Compiling rustc_version v0.2.3
   Compiling memoffset v0.5.3
   Compiling iovec v0.1.4
   Compiling net2 v0.2.33
   Compiling dirs-sys v0.3.4
   Compiling parking_lot_core v0.7.0
   Compiling signal-hook-registry v1.2.0
   Compiling time v0.1.42
   Compiling atty v0.2.14
   Compiling users v0.9.1
   Compiling quote v1.0.2
   Compiling aho-corasick v0.7.6
   Compiling mio v0.6.21
   Compiling dirs v2.0.2
   Compiling directories v2.0.2
   Compiling parking_lot v0.10.0
   Compiling clap v2.33.0
   Compiling crossbeam-queue v0.2.1
   Compiling crossbeam-channel v0.4.0
   Compiling toml v0.5.5
   Compiling term v0.6.1
   Compiling regex v1.3.3
   Compiling signal-hook v0.1.12
   Compiling chrono v0.4.10
   Compiling crossterm v0.14.1
   Compiling simplelog v0.7.4
   Compiling crossbeam-deque v0.7.2
   Compiling thiserror-impl v1.0.9
   Compiling crossbeam v0.7.3
   Compiling thiserror v1.0.9
   Compiling termimad v0.8.9
   Compiling broot v0.11.6
    Finished release [optimized] target(s) in 4m 56s
  Installing /home/vermaden/.cargo/bin/broot
   Installed package `broot v0.11.6` (executable `broot`)
warning: be sure to add `/home/vermaden/.cargo/bin` to your PATH to be able to run the installed binaries

% echo $?
0

Install

Now go to the ~/.cargo/bin directory and copy the broot binary to some place that is listed in your ${PATH} variable.

Then start a new terminal (to pick up the updated ${PATH} variable) and type the broot command.

% cp ~/.cargo/bin/broot ~/scripts
% rehash
% broot

You will be asked if the automatic setup of the br function should take place. I agreed with the y answer.

broot-first-run.jpg

Here are things generated by this process.

% find ~/.config/broot
/home/vermaden/.config/broot
/home/vermaden/.config/broot/conf.toml
/home/vermaden/.config/broot/launcher
/home/vermaden/.config/broot/launcher/installed-v1
/home/vermaden/.config/broot/launcher/bash
/home/vermaden/.config/broot/launcher/bash/br

% find ~/.local/share/broot
/home/vermaden/.local/share/broot
/home/vermaden/.local/share/broot/launcher
/home/vermaden/.local/share/broot/launcher/fish
/home/vermaden/.local/share/broot/launcher/fish/1.fish
/home/vermaden/.local/share/broot/launcher/bash
/home/vermaden/.local/share/broot/launcher/bash/1

As I use the ZSH shell it also updated my ~/.zshrc file.

% tail -3 ~/.zshrc

source /home/vermaden/.config/broot/launcher/bash/br

Finished. You now have broot installed and ready to use.

broot-filter-bhyve.jpg

UPDATE 1 – Now No Patches Are Needed

Thanks to the broot author no patches are needed anymore.

It now builds and works out of the box.

broot-update-fixed

UPDATE 2 – It Is in Ports/Packages Now

The broot file manager is now available via the usual FreeBSD Ports and packages, which makes this guide pointless 🙂

It is available as the misc/broot port.

EOF


List Block Devices on FreeBSD lsblk(8) Style

When I have to work on Linux systems I usually miss many nice FreeBSD tools, to name just a few:

  • sockstat
  • gstat
  • top -b -o res
  • top -m io -o total
  • usbconfig
  • rcorder
  • beadm/bectl
  • idprio/rtprio

… but sometimes – which rarely happens – Linux has some very useful tool that is not available on FreeBSD. An example of such a tool is lsblk(8), which does one thing and does it quite well – it lists block devices and their contents. It has some problems, like listing a disk that is entirely used under a ZFS pool, for which lsblk(8) displays two partitions instead of information about ZFS just being there – but we all know how much the CDDL licensed ZFS is unloved in some circles of that GPL world.

Example lsblk(8) output from Linux system:

$ lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE   MOUNTPOINT
sr0                           11:0    1  1024M  0 rom
sda                            8:0    0 931.5G  0 disk
|-sda1                         8:1    0   500M  0 part   /boot
`-sda2                         8:2    0   931G  0 part
  |-vg_local-lv_root (dm-0)  253:0    0    50G  0 lvm    /
  |-vg_local-lv_swap (dm-1)  253:1    0  17.7G  0 lvm    [SWAP]
  `-vg_local-lv_home (dm-2)  253:2    0   1.8T  0 lvm    /home
sdc                            8:32   0 232.9G  0 disk
`-sdc1                         8:33   0 232.9G  0 part
  `-md1                        9:1    0 232.9G  0 raid10 /data
sdd                            8:48   0 232.9G  0 disk
`-sdd1                         8:49   0 232.9G  0 part
  `-md1                        9:1    0 232.9G  0 raid10 /data

What does FreeBSD offer in this department? The camcontrol(8) and geom(8) commands are available. You can also use the gpart(8) command to list partitions. Below you will find the output of these commands from my single disk laptop. Please note that because of WordPress limitations I had to change all > < characters to ] [ ones in the commands outputs.

# camcontrol devlist
[Samsung SSD 860 EVO mSATA 1TB RVT41B6Q]  at scbus1 target 0 lun 0 (ada0,pass0)

% geom disk list
Geom name: ada0
Providers:
1. Name: ada0
   Mediasize: 1000204886016 (932G)
   Sectorsize: 512
   Mode: r1w1e2
   descr: Samsung SSD 860 EVO mSATA 1TB
   lunid: 5002538e402b4ddd
   ident: S41PNB0K303632D
   rotationrate: 0
   fwsectors: 63
   fwheads: 1

# gpart show
=>        40  1953525088  ada0  GPT  (932G)
          40      409600     1  efi  (200M)
      409640        1024     2  freebsd-boot  (512K)
      410664         984        - free -  (492K)
      411648  1953112064     3  freebsd-zfs  (931G)
  1953523712        1416        - free -  (708K)

They provide the needed information in an acceptable manner, but only on systems with a small amount of disks. What if you would like to display a summary of the contents of all system drives? This is where lsblk.sh comes in handy. While lsblk(8) has many interesting features like the --perms/--scsi/--inverse modes, I focused on providing only the basic feature – listing the system block devices and their contents. As I have a long and pleasing experience with writing shell scripts such as sysutils/beadm or sysutils/automount I thought that writing lsblk.sh may be a good idea. I actually 'open-sourced', or should I say shared, that project/idea in 2016 in the lsblk(8) Command for FreeBSD thread on the FreeBSD Forums, but lack of time really slowed that 'side project' development pace. I finally got back to it to finish it.

The lsblk.sh is generally a small and simple shell script which takes less than 400 SLOC.

lsblk

Here is an example output of the lsblk.sh command from my single disk laptop.

% lsblk.sh
DEVICE         MAJ:MIN  SIZE TYPE                      LABEL MOUNT
ada0             0:5b  932G GPT                           - -
  ada0p1         0:64  200M efi                    efiboot0 [UNMOUNTED]
  ada0p2         0:65  512K freebsd-boot           gptboot0 -
  [FREE]         -:-   492K -                             - -
  ada0p3         0:66  931G freebsd-zfs                zfs0 [ZFS]
  [FREE]         -:-   708K -                             - -


Same output in graphical window.

lolcat

Below you will find an example lsblk.sh output from a server with two system SSD drives (da0/da1) and two HDD data drives (da2/da3).

# lsblk.sh
DEVICE         MAJ:MIN SIZE TYPE                      LABEL MOUNT
da0              0:be  224G GPT                           - -
  da0p1          0:15a 200M efi                    efiboot0 [UNMOUNTED]
  da0p2          0:15b 512K freebsd-boot           gptboot0 -
  [FREE]         -:-   492K -                             - -
  da0p3          0:15c 2.0G freebsd-swap              swap0 [UNMOUNTED]
  da0p4          0:15d 221G freebsd-zfs                zfs0 [ZFS]
  [FREE]         -:-   580K -                             - -
da1              0:bf  224G GPT                           - -
  da1p1          0:16a 200M efi                    efiboot1 [UNMOUNTED]
  da1p2          0:16b 512K freebsd-boot           gptboot1 -
  [FREE]         -:-   492K -                             - -
  da1p3          0:16c 2.0G freebsd-swap              swap1 [UNMOUNTED]
  da1p4          0:16d 221G freebsd-zfs                zfs1 [ZFS]
  [FREE]         -:-   580K -                             - -
da2              0:c0   11T GPT                           - -
  da2p1          0:16e  11T freebsd-zfs                   - [ZFS]
  [FREE]         -:-   1.0G -                             - -
da3              0:c1   11T GPT                           - -
  da3p1          0:16f  11T freebsd-zfs                   - [ZFS]
  [FREE]         -:-   1.0G -                             - -

Below you will find other examples from other systems I have tested lsblk.sh on.

lsblk.examples

While lsblk.sh is not the fastest script on Earth (because of all the needed parsing) it does its job quite well. If you would like to install it on your system just type the commands below:

# fetch -o /usr/local/bin/lsblk https://raw.githubusercontent.com/vermaden/scripts/master/lsblk.sh
# chmod +x /usr/local/bin/lsblk
# hash -r || rehash
# lsblk

If I get the time, which other original Linux lsblk(8) subcommand/option/argument would be worth adding to the lsblk.sh script? 🙂

Regards.

UPDATE 1 – Added USAGE/HELP Information

I just added some usage information that can be displayed by specifying one of these as an argument:

  • h
  • -h
  • --h
  • help
  • -help
  • --help

IMHO writing a man page for such a simple utility is needless. I think I will create a dedicated man page when the lsblk.sh tool grows in size and options comparable to the Linux lsblk(8) equivalent. Here is how it looks.

# lsblk.sh --help
usage:

  BASIC USAGE INFORMATION
  =======================
  # lsblk.sh [DISK]

example(s):

  LIST ALL BLOCK DEVICES IN SYSTEM
  --------------------------------
  # lsblk.sh
  DEVICE         MAJ:MIN SIZE TYPE                      LABEL MOUNT
  ada0             0:5b  932G GPT                           - -
    ada0p1         0:64  200M efi                    efiboot0 [UNMOUNTED]
    ada0p2         0:65  512K freebsd-boot           gptboot0 -
    [FREE]         -:-   492K -                             - -
    ada0p3         0:66  931G freebsd-zfs                zfs0 [ZFS]

  LIST ONLY da1 BLOCK DEVICE
  --------------------------
  # lsblk.sh da1
  DEVICE         MAJ:MIN SIZE TYPE                      LABEL MOUNT
  da1              0:80  2.0G MBR                           - -
    da1s1          0:80  2.0G freebsd                       - -
      da1s1a       0:81  1.0G freebsd-ufs                root /
      da1s1b       0:82  1.0G freebsd-swap               swap SWAP

hint(s):

  DISPLAY ALL DISKS IN SYSTEM
  ---------------------------
  # sysctl kern.disks
  kern.disks: ada0 da0 da1

Regards.

UPDATE 2 – Code Reorganization and 75% Rewrite

… at least this is what git(1) tries to tell me after the commit message.

% git commit (...)
[master 12fd4aa] Rework entire flow. Split code into functions. Add many useful comments. In other words its 2.0 version.
 1 file changed, 494 insertions(+), 505 deletions(-)
 rewrite lsblk.sh (75%)

After several productive hours the new incarnation of lsblk.sh is now available.

It has a similar SLOC count but it is now smaller by a quarter … while doing more and with better accuracy. A great example of why "less is more."

% wc scripts/lsblk.sh.OLD
     491    2201   19721 scripts/lsblk.sh.OLD

% wc scripts/lsblk.sh
     494    1871   15472 scripts/lsblk.sh

Things that do not have a simple solution are described below.

One of them is the 'double' label for FAT filesystems. We have both the /dev/gpt/efiboot0 label and the FAT label named EFISYS. We have to choose something here. As not all FAT filesystems have a label I have chosen the GPT label.

% glabel status | grep ada0p1
  gpt/efiboot0     N/A  ada0p1
msdosfs/EFISYS     N/A  ada0p1
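
Below is a simplified sketch of that preference (not the exact code used in lsblk.sh) – it picks the gpt/ label for a given provider from the glabel(8) output above, assuming the ada0p1 provider as an example.

% glabel status | awk -v DEV=ada0p1 '$3 == DEV && $1 ~ /^gpt\// {print $1}'
gpt/efiboot0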

I was also not able to cover FUSE mounts. When you mount – for example – the /dev/da0 device as NTFS (with ntfs-3g) or exFAT (with mount.exfat) there is no visible difference in mount(8) output.

% mount -t fusefs
/dev/fuse on /mnt/ntfs (fusefs)
/dev/fuse on /mnt/exfat (fusefs)

When I mount such a filesystem with my daemon (like sysutils/automount) I keep track of which device has been mounted to which directory in the /var/run/automount.state file. Then when I get the detach event for the /dev/da0 device I know what to u(n)mount … but when I only have the /dev/fuse device it's just not possible.

… or maybe YOU know a way of extracting from /dev/fuse (or generally from FUSE) the information about which device is mounted where?

Now a little presentation after the update.

Here are various non-ZFS filesystems mounted.

% mount -t nozfs
devfs on /dev (devfs, local, multilabel)
linprocfs on /compat/linux/proc (linprocfs, local)
tmpfs on /compat/linux/dev/shm (tmpfs, local)
/dev/label/ASD on /mnt/tmp (msdosfs, local)
/dev/fuse on /mnt/ntfs (fusefs)
/dev/md0s1f on /mnt/ufs.other (ufs, local)
/dev/gpt/OTHER on /mnt/fat.other (msdosfs, local)
/dev/md0s1a on /mnt/ufs (ufs, local)

… and here is how lsblk.sh now displays them.

% lsblk.sh
DEVICE         MAJ:MIN SIZE TYPE                      LABEL MOUNT
ada0             0:56  932G GPT                           - -
  ada0p1         0:64  200M efi                gpt/efiboot0 -
  ada0p2         0:65  512K freebsd-boot       gpt/gptboot0 -
  [FREE]         -:-   492K -                             - -
  ada0p3         0:66  931G freebsd-zfs                   - [ZFS]
  [FREE]         -:-   708K -                             - -
md0              0:28f 1.0G MBR                           - -
  md0s1          0:294 512M freebsd                       - -
    md0s1a       0:29a 100M freebsd-ufs                root /mnt/ufs
    md0s1b       0:29b  32M freebsd-swap         label/swap SWAP
    md0s1e       0:29c  64M freebsd-ufs                   - -
    md0s1f       0:29d 316M freebsd-ufs                   - /mnt/ufs.other
  md0s2          0:296 256M ntfs                          - -
  md0s3          0:297 256M fat32               msdosfs/ONE -
md1              0:2a4 1.0G msdosfs                   LARGE 
md2              0:298 2.0G GPT                           - -
  md2p1          0:29f 2.0G ms-basic-data         gpt/OTHER /mnt/fat.other

I used some file-based memory devices for this. By default lsblk.sh now also displays the memory disks' contents.

% mdconfig.sh -l
md0     vnode    1024M  /home/vermaden/FILE     
md2     vnode    2048M  /home/vermaden/FILE.GPT 
md1     vnode    1024M  /home/vermaden/FILER    

Here is how it looks in the xterm(1) terminal.

lsblk.2.0

Regards.

UPDATE 3 – Added geli(8) Support

I thought that adding geli(8) support may be useful. The latest lsblk.sh now avoids code duplication for the MOUNT and LABEL detection (moved into a single unified function). I also added more comments for code readability and some minor fixes … and it's again smaller 🙂

% wc lsblk.sh.1.0
     491    2201   19721 lsblk.sh.1.0

% wc lsblk.sh.2.0
     493    1861   15415 lsblk.sh.2.0

% wc lsblk.sh
     488    1820   15332 lsblk.sh

About 40% of the code (according to the git commit) was changed this time (191 insertions and 196 deletions).

# git commit (...)
[master ec9985a] Add geli(8) support. Avoid code duplication and move MOUNT/LABEL detection into function. More comments. Minor fixes.
 1 file changed, 191 insertions(+), 196 deletions(-)

I also forgot to mention that thanks to smart optimizations (like not doing things twice and aggregating grep(1) | awk(1) pipes into single awk(1) queries) lsblk.sh now runs 3 times faster than the initial version 🙂

New output with geli(8) support below.

lsblk.2.1.geli.png

Regards.

UPDATE 4 – Added fuse(8) Support

As I wrote in UPDATE 2, keeping track of what is mounted where under fuse(8) is very hard as all mounted devices magically become /dev/fuse after the mount is done.

After a little research I found that this information (what is really mounted where using the fuse(8) interface under FreeBSD) is available after mounting the procfs filesystem under /proc. You just need to cat the cmdline entry for all PIDs of ntfs-3g. It's not perfect but the information is at least available.

# mount -t procfs proc /proc

# ps ax | grep ntfs-3g
45995  -  Is      0:00.00 ntfs-3g /dev/md1s2 /mnt/ntfs
59607  -  Is      0:00.00 ntfs-3g /dev/md3 /mnt/ntfs.another
83323  -  Is      0:00.00 ntfs-3g /dev/md3 /mnt/ntfs.another

# pgrep ntfs-3g
59607
83323
45995

% pgrep ntfs-3g | while read I; do cat /proc/$I/cmdline; echo; done
ntfs-3g/dev/md3/mnt/ntfs.another
ntfs-3g/dev/md3/mnt/ntfs.another
ntfs-3g/dev/md1s2/mnt/ntfs

This was the code prototype that worked for fuse(8) mountpoints detection.

    if [ -e /proc/0/status ]
    then
      FUSE_MOUNTS=$(
        while read PID
        do
          cat /proc/${PID}/cmdline
          echo
        done << ________EOF
          $( pgrep ntfs-3g )
________EOF
)
      FUSE_MOUNTS=$( echo "${FUSE_MOUNTS}" | sort -u )
      FUSE_MOUNTS=$( echo "${FUSE_MOUNTS}" | sed 's|ntfs-3g||g' )
      FUSE_CHECKS=$( echo "${FUSE_MOUNTS}" | grep /dev/${TARGET}/ )
      if [ "${FUSE_CHECKS}" != "" ]
      then
        MOUNT=$( echo "${FUSE_CHECKS}" | sed "s|/dev/${TARGET}||g" )
      fi
    fi
  fi

… and I have just realized that I found a new (better) way of getting that information without mounting the /proc filesystem – all you need to do is display the ntfs-3g processes with their command line arguments, for example like this:

% ps -p $( pgrep ntfs-3g | tr '\n' ',' | sed '$s/.$//' ) -o command | sed 1d
ntfs-3g /dev/md1s2 /mnt/ntfs
ntfs-3g /dev/md3 /mnt/ntfs.another
ntfs-3g /dev/md3 /mnt/ntfs.another

As that only covered NTFS (the ntfs-3g(8) process) I also added exFAT support by searching for mount.exfat PIDs as well. The fuse(8) mount point detection now works for both NTFS and exFAT filesystems … and the code to support it is even shorter.

  # TRY fuse(8) MOUNTS FROM PROCESSES
  if [ "${MOUNT_FOUND}" != "1" ]
  then
    FUSE_PIDS=$( pgrep mount.exfat ntfs-3g | tr '\n' ',' | sed '$s/.$//' )
    FUSE_MOUNTS=$( ps -p "${FUSE_PIDS}" -o command | sed 1d | sort -u )
    MOUNT=$( echo "${FUSE_MOUNTS}" |  grep "/dev/${TARGET} " | awk '{print $3}' )
  fi

I also changed how the MAJOR and MINOR numbers are displayed – from HEX to DEC – as it is on Linux. FreeBSD's ls(1) from the Base System displays these as HEX – for example you will get the 0x2af value:

% ls -l /dev/md4
crw-rw----  1 root  operator  0x2af 2019.09.29 05:18 /dev/md4

But do the same with the GNU equivalent – gls(1) from FreeBSD Ports (the sysutils/coreutils package) – and it shows MAJOR and MINOR as DEC values. The gls(1) is just ls(1) from the Linux world but as the ls(1) name is already 'taken' by FreeBSD's Base System tool the FreeBSD developers/maintainers add the 'g' letter (for GNU) to distinguish them.

% gls -l /dev/md4
crw-rw---- 1 root 2, 175 2019-09-29 05:18 /dev/md4

… and they are also easier/faster to get with the stat(1) tool.

  MAJ=$( stat -f "%Hr" /dev/${DEV} )
  MIN=$( stat -f "%Lr" /dev/${DEV} )

The latest lsblk.sh looks like this now.

lsblk.2.3.fuse.NTFS.exFAT

… that is why I have not (yet) added lsblk.sh to the FreeBSD Ports. Several new versions with important features spanned just two days 🙂

Regards.

UPDATE 5 – Another 69% Rewrite

After messing with gpart(8) more I found its -p flag which is a game changer. The difference is that with the -p flag it displays names along with the partitions – it is no longer needed to find the PREFIX and 'create' the partition names.

Default gpart(8) output.

# gpart show md0
=>     63  2097089  md0  MBR  (1.0G)
       63  1048576    1  freebsd  (512M)
  1048639   524288    2  ntfs  (256M)
  1572927   524225    3  fat32  (256M)

Output of gpart(8) with -p flag.

# gpart show -p md0
=>     63  2097089    md0  MBR  (1.0G)
       63  1048576  md0s1  freebsd  (512M)
  1048639   524288  md0s2  ntfs  (256M)
  1572927   524225  md0s3  fat32  (256M)

That discovery implied a quite large rewrite of lsblk.sh. The git commit estimates this as a 69% code rewrite.

# git commit (...)
(...)
 1 file changed, 487 insertions(+), 501 deletions(-)
 rewrite lsblk.sh (69%)

The latest lsblk.sh has now these features:

  • Previous bugs fixed.
  • Detects exFAT labels.
  • Is now 20% faster.
  • Has 10% fewer SLOC.
  • Has 15% less code.
  • Handles bsdlabel(8) on entire device properly.
  • Handles exFAT on entire device properly.

The difference in code is shown below.

# wc lsblk.sh
     487    1791   13705 lsblk.sh

# wc lsblk.sh.OLD
     544    1931   16170 lsblk.sh.OLD

The latest lsblk.sh looks as usual but I now use ‘-‘ instead of the ‘[UNMOUNTED]‘ one.

lsblk.2.5.gpart.exfat

EOF

SMB/CIFS on FreeBSD

If you use FreeBSD/Illumos/Linux (or another UNIX/Unix-like system) there is a big chance that you do not like – to say the least – the Windows world, but sometimes there is a need to share some files with it. This is where the Samba project comes in handy. Today I would like to share a minimalistic and simple Samba configuration and also a way to access SMB/CIFS shares from a FreeBSD machine.

samba_logo.png

On the naming side, CIFS (Common Internet File System) is just a particular version/dialect of the SMB (Server Message Block) protocol.

As usual I will use FreeBSD as the server. For the setup I used the FreeBSD 12.0-RELEASE virtual machine image available from the project download location.

There are several formats available – qcow2/raw/vhd/vmdk – but as I will be using VirtualBox I used the VMDK one.

The main FreeBSD configuration file on the server can be as small and simple as the one below.

# cat /etc/rc.conf
hostname="samba"
ifconfig_em0="inet 10.0.10.40/24"
defaultrouter="10.0.10.1"
sshd_enable="YES"

You of course do not need SSH to serve SMB/CIFS shares with Samba.

Serve SMB/CIFS Share on FreeBSD with Samba

There are several versions of Samba available on FreeBSD, but if you do not have a specific reason to use an older version then just go ahead with the latest one.

# pkg search samba
p5-Samba-LDAP-0.05_2           Manage a Samba PDC with an LDAP Backend
p5-Samba-SIDhelper-0.0.0_3     Create SIDs based on G/UIDs
samba-nsupdate-9.13.3_1        nsupdate utility with GSS-TSIG support
samba46-4.6.16_1               Free SMB/CIFS and AD/DC server and client for Unix
samba47-4.7.12                 Free SMB/CIFS and AD/DC server and client for Unix
samba48-4.8.7                  Free SMB/CIFS and AD/DC server and client for Unix

First you will need to add the Samba package.

# pkg install samba48

Then we need to create a configuration file for Samba. I will assume here that you would like to share two things as examples: the /data directory with write permissions only for my vermaden user, and my home directory /home/vermaden with read permissions for me and all people in my vermaden group. The so-called public read is disabled entirely. Access to these shares will be possible only after passing a user and password. I also added several performance-related options. Below is the /usr/local/etc/smb4.conf configuration file.

# cat /usr/local/etc/smb4.conf
[global]
workgroup          = workgroup
netbios name       = smb
server string      = samba
security           = user
max smbd processes = 3
encrypt passwords  = yes
socket options     = TCP_NODELAY IPTOS_LOWDELAY IPTOS_THROUGHPUT SO_KEEPALIVE SO_RCVBUF=65536 SO_SNDBUF=65536
aio read size      = 16384
aio write size     = 16384
strict locking     = no
strict sync        = no

# DISABLE PRINTING
load printers           = no
disable spoolss         = yes
show add printer wizard = no

[data]
  path       = /data
  public     = no
  writable   = yes
  browsable  = no
  write list = vermaden

[vermaden]
  path       = /home/vermaden
  public     = no
  writable   = no
  browsable  = no
  write list = @vermaden
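
Before starting anything it may also be worth validating that configuration with Samba's testparm(1) utility which comes with the same package – for example:

# testparm /usr/local/etc/smb4.conf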

We will also need the vermaden user, so let's create one with the pw(8) command.

First the vermaden group with a GID of 1000. The -N flag just shows what would be done instead of making actual changes to the system. Let's try that and then execute the command without the -N flag to actually add the group.

# pw groupadd -n vermaden -g 1000 -N
vermaden:*:1000:
# pw groupadd -n vermaden -g 1000
# pw groupshow vermaden
vermaden:*:1000:

As we have the group it's time to create the vermaden user with a UID of 1000. Like with the group, let's first try with the -N flag to check what will be done.

# pw useradd -n vermaden -c '' -u 1000 -g 1000 -m -N
vermaden:*:1000:1000::0:0::/home/vermaden:/bin/sh
# pw useradd -n vermaden -c '' -u 1000 -g 1000 -m
# pw usershow vermaden
vermaden:*:1000:1000::0:0::/home/vermaden:/bin/sh

Let’s verify our vermaden user again.

# id vermaden
uid=1000(vermaden) gid=1000(vermaden) groups=1000(vermaden)
# su - vermaden
By pressing "Scroll Lock" you can use the arrow keys to scroll backward
through the console output.  Press "Scroll Lock" again to turn it off.
Don't have a "Scroll Lock" key? The "Pause / Break" key acts alike.

Now let's create a password for this new vermaden user.

# passwd vermaden
Changing local password for vermaden
New Password:
Retype New Password:

Now we need to add the vermaden user with the pdbedit command from the Samba package.

# which pdbedit
/usr/local/bin/pdbedit

# pkg which `which pdbedit`
/usr/local/bin/pdbedit was installed by package samba48-4.8.7

# pdbedit -a -u vermaden
new password:
retype new password:
Unix username:        vermaden
NT username:
Account Flags:        [U          ]
User SID:             S-1-5-21-1751207453-560213463-1759912891-1000
Primary Group SID:    S-1-5-21-1751207453-560213463-1759912891-513
Full Name:
Home Directory:       \\smb\vermaden
HomeDir Drive:
Logon Script:
Profile Path:         \\smb\vermaden\profile
Domain:               SMB
Account desc:
Workstations:
Munged dial:
Logon time:           0
Logoff time:          9223372036854775807 seconds since the Epoch
Kickoff time:         9223372036854775807 seconds since the Epoch
Password last set:    Fri, 21 Dec 2018 16:49:29 UTC
Password can change:  Fri, 21 Dec 2018 16:49:29 UTC
Password must change: never
Last bad password   : 0
Bad password count  : 0
Logon hours         : FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF

To list all users with the pdbedit command use the -L argument.

# pdbedit -L
vermaden:1000:

We now need to add Samba to the automatic startup of FreeBSD system services.

# sysrc samba_server_enable=YES
samba_server_enable:  -> YES

# sysrc samba_server_enable
samba_server_enable: YES

# cat /etc/rc.conf
hostname="samba"
ifconfig_em0="inet 10.0.10.40/24"
defaultrouter="10.0.10.1"
sshd_enable="YES"
samba_server_enable="YES"

Now we can start the Samba service.

# service samba_server start
Performing sanity check on Samba configuration: OK
Starting nmbd.
Starting smbd.

Let’s check which Samba daemons listen on which ports.

# sockstat -l -4
USER     COMMAND    PID   FD PROTO  LOCAL ADDRESS         FOREIGN ADDRESS
root     smbd       599   33 tcp4   *:445                 *:*
root     smbd       599   34 tcp4   *:139                 *:*
root     nmbd       595   15 udp4   *:137                 *:*
root     nmbd       595   16 udp4   *:138                 *:*
(...)

Now let’s try to access the /data share from the Windows system.

Open explorer.exe on the Windows machine, type //smb/data into the location field and then type smb\vermaden as the username.

bsd-share-01

You should be able to access the share now as shown below.

bsd-share-02

Let’s put some text into that test.txt file.

bsd-share-03.png

Let’s verify that it works on the FreeBSD side.

# cat /data/test.txt
Input from Windows.

So we are able to access/modify files on the FreeBSD machine from the Windows world.

Access SMB/CIFS Share from FreeBSD

Let’s try the other way around.

By default there are several shares already served on Windows.

C:\>net share

Share name   Resource                        Remark

-------------------------------------------------------------------------------
C$           C:\                             Default share
IPC$                                         Remote IPC
ADMIN$       C:\Windows                      Remote Admin
Users        C:\Users
The command completed successfully.


C:\>

You can share a directory from Windows by using the graphical interface as shown below.

win-share-01

… or by using the CLI within the cmd.exe interpreter with the net commands.

win-share-02

win-share-03

win-share-04

The test share is now exported for the vuser user with FULL access rights which means read/write in the Windows world.

Here are the same commands in text so you may copy/paste them as needed.

C:\Windows\system32>cd \

C:\>mkdir asd

C:\>net share test=C:\asd /grant:vuser,FULL
test was shared successfully.


C:\>net share

Share name   Resource                        Remark

-------------------------------------------------------------------------------
C$           C:\                             Default share
IPC$                                         Remote IPC
ADMIN$       C:\Windows                      Remote Admin
test         C:\asd
Users        C:\Users
The command completed successfully.


C:\>

Let's try to mount it using the mount_smbfs command on the FreeBSD system. The 10.0.10.4 address is the IP of the Windows machine.

# mount_smbfs -I 10.0.10.4 //vuser@vbox/test /mnt
Password:
#

# mount
/dev/gpt/rootfs on / (ufs, local, soft-updates)
devfs on /dev (devfs, local, multilabel)
//VUSER@VBOX/TEST on /mnt (smbfs)

It also works the other way.
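
When you no longer need the share on the FreeBSD side you can detach it as usual with umount(8) – for example, assuming the /mnt mount point used above:

# umount /mnt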

After your job is done you may remove the test share, also with the net command, as shown below.

win-share-05.png

… and also the same commands in text so you may copy/paste them as needed.

C:\>net share test /delete
test was deleted successfully.


C:\>net share

Share name   Resource                        Remark

-------------------------------------------------------------------------------
C$           C:\                             Default share
IPC$                                         Remote IPC
ADMIN$       C:\Windows                      Remote Admin
Users        C:\Users
The command completed successfully.


C:\>

This sentence concludes this article 😉

UPDATE 1

The SMB/CIFS on FreeBSD article was featured in the BSD Now 279 – Future of ZFS episode.

Thanks for mentioning!

EOF


The Power to Serve – FreeBSD Power Management

This is the motto of the FreeBSD operating system – The Power to Serve – which also fits the topic of this article very well. A decade ago (yes, time flies) I even made a wallpaper with this motto – still available on the DeviantArt page.

freebsd_the_power_to_serve_small.jpg

Time for a FreeBSD article covering its power management features. It also applies to the FreeBSD Desktop series but it is not limited to it. Popular opinion seems to be that FreeBSD is so server-oriented that it lacks any power management mechanisms. Nothing could be further from the truth. While less important on desktops or servers (it will still lower your electricity bill), it is desirable to properly configure power management on laptops so they will have longer battery life and run more quietly.

I write this as the FreeBSD Handbook does not cover all that information in its 11.13. Power and Resource Management chapter. The FreeBSD on Laptops article's part 4. Power Management is from the ancient times of FreeBSD 10.1-RELEASE. There is some information on the FreeBSD Wiki page but parts of it are outdated.

FreeBSD offers many mechanisms in the power management department:

  • power off devices without attached driver
  • scale CPU frequency and power
  • supports CPU sleep states (C1/C1E/C2/C3/…)
  • enabling/disabling Turbo Mode available in most CPUs
  • per USB device power management options
  • SATA/AHCI channels/controllers power management
  • suspend/resume support (along with using laptop lid for it)
  • support for vendor specific tools that help to measure power management
  • tools and ACPI support for fan speed control
  • tools and ACPI support for setting screen brightness
  • battery capacity status and running time estimation
  • network interfaces power saving options

One word about different files for the settings in the FreeBSD system:

  • /etc/rc.conf – does not require a reboot, just reloading the affected daemons
  • /etc/sysctl.conf – does not require a reboot – you can set these values at runtime
  • /boot/loader.conf – these settings REQUIRE a reboot

Here is the Table of Contents (non-clickable) for the article.

  • Information
    • Battery
    • Battery Wear
    • CPU
    • lscpu(1)
    • dmesg(8)
  • CPU Frequency Scaling
    • powerd(8)
    • powerdxx(8)
    • C-States
    • CPU Turbo Mode
  • USB Devices
  • SATA/AHCI Power Management
  • Devices without Driver
    • Nvidia Optimus
  • Suspend and Resume
  • Network Interfaces
  • Vendor Tools
  • DTrace
  • Other
    • ZFS
    • Applications
  • Hardware
  • UPDATE 1 – Graphics Card Power Saving
  • UPDATE 2 – AMD CPU Temperatures
  • UPDATE 3 – Suspend/Resume Tips

Information

Let's start by describing where to get the needed information about the current CPU speed, used C-states, current power management modes for USB devices, battery capacity and remaining time, etc.

Battery

To get battery information you can use the acpiconf(8) tool. This is the acpiconf(8) output for my main battery (in the ThinkPad T420s laptop) with AC power attached.

% acpiconf -i 0
Design capacity:        44000 mWh
Last full capacity:     37930 mWh
Technology:             secondary (rechargeable)
Design voltage:         11100 mV
Capacity (warn):        1896 mWh
Capacity (low):         200 mWh
Low/warn granularity:   1 mWh
Warn/full granularity:  1 mWh
Model number:           45N1037
Serial number:          28608
Type:                   LION
OEM info:               SANYO
State:                  high
Remaining capacity:     100%
Remaining time:         unknown
Present rate:           0 mW
Present voltage:        12495 mV

… and with AC power detached.

% acpiconf -i 0
Design capacity:        44000 mWh
Last full capacity:     37930 mWh
Technology:             secondary (rechargeable)
Design voltage:         11100 mV
Capacity (warn):        1896 mWh
Capacity (low):         200 mWh
Low/warn granularity:   1 mWh
Warn/full granularity:  1 mWh
Model number:           45N1037
Serial number:          28608
Type:                   LION
OEM info:               SANYO
State:                  high
Remaining capacity:     100%
Remaining time:         2:31
Present rate:           0 mW
Present voltage:        12492 mV

Now that AC power is detached from the laptop the Remaining time: field will show you the remaining time estimation for this single battery, shown as 2:31 here (two hours and thirty-one minutes).

Below is the acpiconf(8) output for my secondary battery (in the ThinkPad T420s Ultrabay instead of the DVD drive).

% acpiconf -i 1
Design capacity:        31320 mWh
Last full capacity:     24510 mWh
Technology:             secondary (rechargeable)
Design voltage:         10800 mV
Capacity (warn):        1225 mWh
Capacity (low):         200 mWh
Low/warn granularity:   1 mWh
Warn/full granularity:  1 mWh
Model number:           45N1041
Serial number:            260
Type:                   LiP
OEM info:               SONY
State:                  high
Remaining capacity:     100%
Remaining time:         unknown
Present rate:           0 mW
Present voltage:        12082 mV

… and with AC power detached.

% acpiconf -i 1
Design capacity:        31320 mWh
Last full capacity:     24510 mWh
Technology:             secondary (rechargeable)
Design voltage:         10800 mV
Capacity (warn):        1225 mWh
Capacity (low):         200 mWh
Low/warn granularity:   1 mWh
Warn/full granularity:  1 mWh
Model number:           45N1041
Serial number:            260
Type:                   LiP
OEM info:               SONY
State:                  discharging
Remaining capacity:     98%
Remaining time:         1:36
Present rate:           14986 mW
Present voltage:        11810 mV

With AC power detached it shows the Remaining time: as 1:36 for the secondary battery.

So in total it is an estimated 4:07 of time on battery. The same time in minutes (247) will be shown in the sysctl(8) value named hw.acpi.battery.time as shown below.

% sysctl hw.acpi.battery.time
hw.acpi.battery.time: 247

You can also get more 'complete' battery information with the sysctl(8) values below under the hw.acpi.battery MIB.

% sysctl hw.acpi.battery
hw.acpi.battery.info_expire: 5
hw.acpi.battery.units: 2
hw.acpi.battery.state: 1
hw.acpi.battery.time: 247
hw.acpi.battery.life: 99

The hw.acpi.battery.time value will show you '-1' if you have AC power attached.

% sysctl hw.acpi.battery
hw.acpi.battery.info_expire: 5
hw.acpi.battery.units: 2
hw.acpi.battery.state: 0
hw.acpi.battery.time: -1
hw.acpi.battery.life: 100

Battery Wear

As time passes batteries lose their 'design' capacity. After 1-2 years such a battery can have only 70% or less of its original efficiency.

All the information needed to check that is provided by the acpiconf(8) command with the Design capacity: and Last full capacity: values. I have made a battery-capacity.sh script that will tell you what the current battery efficiency is. Here is how it looks in action.

% battery-capacity.sh 0
Battery '0' model '45N1037' has efficiency: 86%

% battery-capacity.sh 1
Battery '1' model '45N1041' has efficiency: 78%

Here is the battery-capacity.sh script itself.

#! /bin/sh

if [ ${#} -ne 1 ]
then
  echo "usage: ${0##*/} BATTERY"
  exit
fi

if acpiconf -i ${1} 1> /dev/null 2> /dev/null
then
  DATA=$( acpiconf -i ${1} )
  MAX=$( echo "${DATA}" | grep '^Design\ capacity:'     | awk -F ':' '{print $2}' | tr -c -d '0-9' )
  NOW=$( echo "${DATA}" | grep '^Last\ full\ capacity:' | awk -F ':' '{print $2}' | tr -c -d '0-9' )
  MOD=$( echo "${DATA}" | grep '^Model\ number:'        | awk -F ':' '{print $2}' | awk '{print $1}' )
  echo -n "Battery '${1}' model '${MOD}' has efficiency: "
  printf '%1.0f%%\n' $( bc -l -e "scale = 2; ${NOW} / ${MAX} * 100" -e quit )
else
  echo "NOPE: Battery '${1}' does not exists on this system."
  echo "INFO: Most systems has only '0' or '1' batteries."
  exit 1
fi

CPU

To get information about the current CPUs you will have to use the dev.cpu MIB or dev.cpu.0 for the first physical CPU core.

% sysctl dev.cpu.0
dev.cpu.0.cx_method: C1/hlt C2/io
dev.cpu.0.cx_usage_counters: 412905 0
dev.cpu.0.cx_usage: 100.00% 0.00% last 290us
dev.cpu.0.cx_lowest: C1
dev.cpu.0.cx_supported: C1/1/1 C2/3/104
dev.cpu.0.freq_levels: 2501/35000 2500/35000 2200/29755 2000/26426 1800/23233 1600/20164 1400/17226 1200/14408 1000/11713 800/9140
dev.cpu.0.freq: 800
dev.cpu.0.%parent: acpi0
dev.cpu.0.%pnpinfo: _HID=none _UID=0
dev.cpu.0.%location: handle=\_PR_.CPU0
dev.cpu.0.%driver: cpu
dev.cpu.0.%desc: ACPI CPU

If you load the coretemp(4) kernel module with the kldload(8) command you will get additional temperature information.
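
For example:

# kldload coretemp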

Below is the same sysctl(8) dev.cpu.0 MIB with the coretemp(4) kernel module loaded.

% sysctl dev.cpu.0
dev.cpu.0.temperature: 49.0C
dev.cpu.0.coretemp.throttle_log: 0
dev.cpu.0.coretemp.tjmax: 100.0C
dev.cpu.0.coretemp.resolution: 1
dev.cpu.0.coretemp.delta: 51
dev.cpu.0.cx_method: C1/hlt C2/io
dev.cpu.0.cx_usage_counters: 16549 0
dev.cpu.0.cx_usage: 100.00% 0.00% last 1489us
dev.cpu.0.cx_lowest: C1
dev.cpu.0.cx_supported: C1/1/1 C2/3/104
dev.cpu.0.freq_levels: 2501/35000 2500/35000 2200/29755 2000/26426 1800/23233 1600/20164 1400/17226 1200/14408 1000/11713 800/9140
dev.cpu.0.freq: 800
dev.cpu.0.%parent: acpi0
dev.cpu.0.%pnpinfo: _HID=none _UID=0
dev.cpu.0.%location: handle=\_PR_.CPU0
dev.cpu.0.%driver: cpu
dev.cpu.0.%desc: ACPI CPU

Let me describe some of the most useful ones.

CPU core temperature.
dev.cpu.0.temperature: 49.0C

CPU supported C-states (C1 and C2 for this CPU).
dev.cpu.0.cx_supported: C1/1/1 C2/3/104

CPU statistics for C-state usage (only the C1 state has been used).
dev.cpu.0.cx_usage_counters: 16549 0
dev.cpu.0.cx_usage: 100.00% 0.00% last 1489us

CPU maximum (deepest) C-state enabled.
dev.cpu.0.cx_lowest: C1

CPU supported frequency levels with power usage after the ‘/‘ character. The 2500/35000 can be read as 2.5 GHz frequency with 35 W power usage and 2501 is the Turbo Mode. The lowest is 800 MHz with about 9 W usage.
dev.cpu.0.freq_levels: 2501/35000 2500/35000 2200/29755 2000/26426 1800/23233 1600/20164 1400/17226 1200/14408 1000/11713 800/9140

CPU current frequency (will vary when you use the powerd(8) or powerdxx(8) daemon).
dev.cpu.0.freq: 800

The hw.acpi.thermal.tz0.temperature MIB will also show you the current thermal zone temperature.

% sysctl hw.acpi.thermal.tz0.temperature
hw.acpi.thermal.tz0.temperature: 49.1C

To check how many cores you have use these commands.

% grep FreeBSD/SMP /var/run/dmesg.boot
FreeBSD/SMP: Multiprocessor System Detected: 2 CPUs
FreeBSD/SMP: 1 package(s) x 2 core(s)

% sysctl kern.smp.cpus
kern.smp.cpus: 2

If my description does not feel useful then you should also check the -d flag of the sysctl(8) command as shown below.

% sysctl -d dev.cpu.0.freq
dev.cpu.0.freq: Current CPU frequency

lscpu(1)

There is also a third-party tool called lscpu(1) that will describe your CPU features and model. You will have to add it from packages.

# pkg install lscpu

To make lscpu(1) work the cpuctl(4) kernel module is needed.

Here is how it looks for my dual core CPU.

# kldload cpuctl
# lscpu
Architecture:            amd64
Byte Order:              Little Endian
Total CPU(s):            2
Thread(s) per core:      2
Core(s) per socket:      2
Socket(s):               0
Vendor:                  GenuineIntel
CPU family:              6
Model:                   42
Model name:              Intel(R) Core(TM) i5-2520M CPU @ 2.50GHz
Stepping:                7
L1d cache:               32K
L1i cache:               32K
L2 cache:                256K
L3 cache:                3M
Flags:                   fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 cflsh ds acpi mmx fxsr sse sse2 ss htt tm pbe sse3 pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline aes xsave osxsave avx syscall nx rdtscp lm lahf_lm

dmesg(8)

The dmesg(8) command (or the /var/run/dmesg.boot file after longer uptime) also covers your CPU model and feature information.

% grep CPU /var/run/dmesg.boot
CPU: Intel(R) Core(TM) i5-2520M CPU @ 2.50GHz (2491.97-MHz K8-class CPU)
FreeBSD/SMP: Multiprocessor System Detected: 2 CPUs
cpu0:  on acpi0
coretemp0:  on cpu0

CPU Frequency Scaling

For the CPU scaling feature you may use the powerd(8) daemon available in the FreeBSD base system or powerdxx(8) from the FreeBSD Ports or packages. The powerdxx(8) daemon aims to scale multicore systems better and not turn all cores to a high state when there is only moderate load on the system, but some people may prefer the other approach – to have full power available when they do anything and to save power when they do nothing. Thus powerd(8) is not better than powerdxx(8) or vice versa. They are just different, which gives you more options for your needs.

No matter which one you will choose it has to be configured in the /etc/rc.conf file.

powerd(8)

Here are the options for powerd(8) daemon.

powerd_enable=YES
powerd_flags="-n adaptive -a hiadaptive -b adaptive -m 800 -M 1600"

The -n option is for the unknown state – if for some reason powerd(8) is not able to determine whether you are running on AC power or battery. The -a option is for AC power and -b for running on the battery. The adaptive setting is less 'aggressive' so it is more battery-time friendly. The hiadaptive one is more aggressive thus it is preferred when you are running on AC power. The -m option sets the minimum CPU frequency to be used and -M the maximum. Both are in MHz units. Check the powerd(8) man page for more details.

powerdxx(8)

First you will need to install it.

# pkg install powerdxx

Then its options are identical to those of the powerd(8) daemon.

powerdxx_enable=YES
powerdxx_flags="-n adaptive -a hiadaptive -b adaptive -m 800 -M 1600"

Check the powerd(8) section above for the flags/parameters description.

A decade ago CPU frequency scaling on FreeBSD was not as 'easy' as it is now – you may check my old HOWTO: FreeBSD CPU Scaling and Power Saving on that topic from 2008.

C-States

The C-states can be configured in the /etc/rc.conf file with these options.

  • performance_cx_lowest
  • economy_cx_lowest

The economy_cx_lowest parameter is for running on battery and the performance_cx_lowest parameter is for running on AC power. Both are set using the /etc/rc.d/power_profile script used by the rc(8) subsystem. It sets the hw.acpi.cpu.cx_lowest parameter which sets/controls all dev.cpu.*.cx_lowest values. You can also track the changes in the /var/log/messages file when you attach/detach the AC power.

% tail -f /var/log/messages
Nov 28 13:14:42 t420s power_profile[48231]: changed to 'economy'
Nov 28 13:14:46 t420s power_profile[56835]: changed to 'performance'
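
You can also check or set that value directly at runtime with sysctl(8) – a quick example (the value you set has to be one of the C-states supported by your CPU):

% sysctl hw.acpi.cpu.cx_lowest
# sysctl hw.acpi.cpu.cx_lowest=C2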

Usually I just use these values.

performance_cx_lowest=C1
economy_cx_lowest=Cmax

These settings above are generally sufficient for most systems. To check which C-states your CPU supports get the value of dev.cpu.0.cx_supported MIB.

% sysctl dev.cpu.0.cx_supported
dev.cpu.0.cx_supported: C1/1/1 C2/3/104

My CPU supports only C1 and C2 but yours may support more. I remember once when using some old Core 2 Duo laptop that the C2 state had quite a 'noticeable' delay when the CPU was getting back from the C2 (sleep) state to the running state, so the following setup helps. You do not use the performance_cx_lowest and economy_cx_lowest parameters. You set the first core to C1 and all other cores to C2. This way even on battery you have a fully responsive system and all other cores may sleep and save energy.

For example if you had 4 cores and your maximum (deepest) supported C-state was C3, then you would put these into the /etc/sysctl.conf file.

% grep cx_lowest /etc/sysctl.conf
dev.cpu.0.cx_lowest=C1
dev.cpu.1.cx_lowest=C3
dev.cpu.2.cx_lowest=C3
dev.cpu.3.cx_lowest=C3

CPU Turbo Mode

There are two ways to enable Turbo Mode. One way is to run the powerd(8) or powerdxx(8) daemon with the maximum frequency set above the nominal CPU speed. For example if you have a CPU described as dual-core 2.3 GHz then set the maximum speed with the -M flag to 4000 for example (which would mean 4 GHz). If you do not use a CPU frequency scaling daemon then you will use the dev.cpu.0.freq parameter with the highest (first) value from the dev.cpu.0.freq_levels MIB.

Supported CPU frequency levels on my system.

% sysctl dev.cpu.0.freq_levels 
dev.cpu.0.freq_levels: 2501/35000 2500/35000 2200/29755 2000/26426 1800/23233 1600/20164 1400/17226 1200/14408 1000/11713 800/9140

The highest value (leftmost) is 2501/35000 so I need to set the dev.cpu.0.freq parameter with this value to use Turbo Mode. You need to use only the 'frequency' part of the value because if you paste it together with the power requirements description it will fail.

# sysctl dev.cpu.0.freq=2501/35000
sysctl: invalid integer '2501/35000'

This is how it should be used.

# sysctl dev.cpu.0.freq=2501
dev.cpu.0.freq: 800 -> 2501

USB Devices

To list attached USB devices use the usbconfig(8) tool.

% usbconfig
ugen1.1:  at usbus1, cfg=0 md=HOST spd=SUPER (5.0Gbps) pwr=SAVE (0mA)
ugen2.1:  at usbus2, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=SAVE (0mA)
ugen0.1:  at usbus0, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=SAVE (0mA)
ugen2.2:  at usbus2, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=SAVE (0mA)
ugen0.2:  at usbus0, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=SAVE (0mA)
ugen0.3:  at usbus0, cfg=0 md=HOST spd=FULL (12Mbps) pwr=ON (100mA)
ugen2.3:  at usbus2, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=SAVE (0mA)

You will see that the pwr parameter (short for power) shows you the current power setting which can be:

  • ON
  • OFF
  • SAVE

To set a new USB power option for the ugen1.1 device also use the usbconfig(8) tool with the power_save parameter in the following way.

# usbconfig -u 1 -a 1 power_save

USB power management does not have a dedicated config file on FreeBSD so we will put these settings into the universal /etc/rc.local file which is run at the end of the start-up process managed by the rc(8) subsystem. Here is the added content, with an exception for the ‘Lenovo USB Receiver‘ which is my wireless mouse.

% grep -A 10 POWER /etc/rc.local
# POWER SAVE USB DEVICES
usbconfig \
  | grep -v 'Lenovo USB Receiver' \
  | awk '{print $1}' \
  | sed 's|ugen||'g \
  | tr -d : \
  | awk -F '.' '{print $1 " " $2 }' \
  | while read U A
    do
      usbconfig -u ${U} -a ${A} power_save 2> /dev/null
    done

It's a good idea to NOT save power for mouse or trackpad devices because you will probably find it annoying to have to wait about a second each time you would like to use them. I use a loop to set power saving for all USB devices except the wireless USB mouse (identified as the ‘Lenovo USB Receiver‘ device).

SATA/AHCI Power Management

FreeBSD offers AHCI channel power management via the ahci(4) driver. These power management settings can be set at boot using the hint.ahcich.*.pm_level parameter in the /boot/loader.conf file. I use a configuration for up to 8 channels while I only have three.

% grep ahcich /var/run/dmesg.boot
ahcich0:  at channel 0 on ahci0
ahcich1:  at channel 1 on ahci0
ahcich4:  at channel 4 on ahci0
ada0 at ahcich0 bus 0 scbus0 target 0 lun 0

That is because settings for non-existent devices are harmless and will not display any error messages, and this way you do not have to use different settings for various systems, which saves time. This is the hint.ahcich.*.pm_level description from the ahci(4) man page.

  hint.ahcich.X.pm_level

    controls SATA interface Power Management for the specified channel,
    allowing some power to be saved at the cost of additional command latency.

    Some controllers, such as ICH8, do not implement modes 2 and 3 with NCQ
    used. Because of artificial entering latency, performance degradation in
    modes 4 and 5 is much smaller then in modes 2 and 3.

Possible power management options are:

  • 0 – interface Power Management is disabled (default)
  • 1 – device is allowed to initiate PM state change, host is passive
  • 2 – host initiates PARTIAL PM state transition every time port becomes idle
  • 3 – host initiates SLUMBER PM state transition every time port becomes idle
  • 4 – driver initiates PARTIAL PM state transition 1ms after port becomes idle
  • 5 – driver initiates SLUMBER PM state transition 125ms after port becomes idle

Here are my settings from the /boot/loader.conf file.

# AHCI POWER MANAGEMENT FOR EVERY USED CHANNEL (ahcich 0-7)
  hint.ahcich.0.pm_level=5
  hint.ahcich.1.pm_level=5
  hint.ahcich.2.pm_level=5
  hint.ahcich.3.pm_level=5
  hint.ahcich.4.pm_level=5
  hint.ahcich.5.pm_level=5
  hint.ahcich.6.pm_level=5
  hint.ahcich.7.pm_level=5

Devices without Driver

FreeBSD has a power saving option to not power devices that do not have an attached driver. It's called hw.pci.do_power_nodriver and you can set it in the /boot/loader.conf file. Here is its description from the pci(4) man page.

  hw.pci.do_power_nodriver (Defaults to 0)

    Place devices into a low power state (D3) when
    a suitable device driver is not found.

It can be set to one of the following values:

  • 0 – All devices are left fully powered (default).
  • 1 – Like ‘2‘ except that storage controllers are also not powered down.
  • 2 – Powers down most devices (display/memory/peripherals not powered down).
  • 3 – Powers down all PCI devices without a device driver.

Here is my setting from the /boot/loader.conf file.

# POWER OFF DEVICES WITHOUT ATTACHED DRIVER
  hw.pci.do_power_nodriver=3

The pciconf(8) utility will show you what devices are in your system and which driver is attached to each. If no driver is attached you will see none*@ for such devices, like none0@ below. You can also check the man page for most drivers, like the em(4) man page for the em0 device or the xhci(4) page for the xhci0 device.

% pciconf -l
hostb0@pci0:0:0:0:      class=0x060000 card=0x21d217aa chip=0x01048086 rev=0x09 hdr=0x00
vgapci0@pci0:0:2:0:     class=0x030000 card=0x21d217aa chip=0x01268086 rev=0x09 hdr=0x00
none0@pci0:0:22:0:      class=0x078000 card=0x21d217aa chip=0x1c3a8086 rev=0x04 hdr=0x00
em0@pci0:0:25:0:        class=0x020000 card=0x21ce17aa chip=0x15028086 rev=0x04 hdr=0x00
ehci0@pci0:0:26:0:      class=0x0c0320 card=0x21d217aa chip=0x1c2d8086 rev=0x04 hdr=0x00
hdac0@pci0:0:27:0:      class=0x040300 card=0x21d217aa chip=0x1c208086 rev=0x04 hdr=0x00
pcib1@pci0:0:28:0:      class=0x060400 card=0x21d217aa chip=0x1c108086 rev=0xb4 hdr=0x01
pcib2@pci0:0:28:1:      class=0x060400 card=0x21d217aa chip=0x1c128086 rev=0xb4 hdr=0x01
pcib3@pci0:0:28:3:      class=0x060400 card=0x21d217aa chip=0x1c168086 rev=0xb4 hdr=0x01
pcib4@pci0:0:28:4:      class=0x060400 card=0x21d217aa chip=0x1c188086 rev=0xb4 hdr=0x01
ehci1@pci0:0:29:0:      class=0x0c0320 card=0x21d217aa chip=0x1c268086 rev=0x04 hdr=0x00
isab0@pci0:0:31:0:      class=0x060100 card=0x21d217aa chip=0x1c4f8086 rev=0x04 hdr=0x00
ahci0@pci0:0:31:2:      class=0x010601 card=0x21d217aa chip=0x1c038086 rev=0x04 hdr=0x00
ichsmb0@pci0:0:31:3:    class=0x0c0500 card=0x21d217aa chip=0x1c228086 rev=0x04 hdr=0x00
iwn0@pci0:3:0:0:        class=0x028000 card=0x11118086 chip=0x42388086 rev=0x3e hdr=0x00
sdhci_pci0@pci0:5:0:0:  class=0x088000 card=0x21d217aa chip=0xe8221180 rev=0x07 hdr=0x00
xhci0@pci0:13:0:0:      class=0x0c0330 card=0x01941033 chip=0x01941033 rev=0x04 hdr=0x00

You can also use the -v flag to get more detailed information.

% pciconf -l -v
(...)
xhci0@pci0:13:0:0:      class=0x0c0330 card=0x01941033 chip=0x01941033 rev=0x04 hdr=0x00
    vendor     = 'NEC Corporation'
    device     = 'uPD720200 USB 3.0 Host Controller'
    class      = serial bus
    subclass   = USB

Nvidia Optimus

If for some reason your BIOS/UEFI firmware does not allow you to disable the Nvidia discrete graphics card you may use this script to disable it so it will not drain power from your system. It requires the acpi_call(4) kernel module which is provided by the acpi_call package.

# mkdir /root/bin
# cd /root/bin
# fetch https://people.freebsd.org/~xmj/turn_off_gpu.sh
# pkg install acpi_call
# kldload acpi_call
# chmod +x /root/bin/turn_off_gpu.sh
# /root/bin/turn_off_gpu.sh

You may add it to the /etc/rc.local file after the USB power saving options with this entry.

# DISABLE NVIDIA CARD
  /root/bin/turn_off_gpu.sh

If it succeeds it will store the working ACPI call in the /root/.gpu_method file and execute it each subsequent time.

Suspend and Resume

The biggest enemies of the suspend/resume mechanism are bugs in your hardware's BIOS/UEFI firmware. Sometimes disabling Bluetooth helps – that is the option for the ThinkPad T420s for example. To check which suspend modes are supported on your system check the hw.acpi.supported_sleep_state MIB from the sysctl(8) subsystem.

% sysctl hw.acpi.supported_sleep_state
hw.acpi.supported_sleep_state: S3 S4 S5

To enter the ACPI S3 sleep state (suspend) you can use the acpiconf(8) tool or the zzz(8) tool.

# zzz

… or with acpiconf(8) tool.

# acpiconf -s 3

It's exactly the same, as stated in the zzz(8) man page.

You can also set a sysctl(8) value so that every time you close your laptop lid your system will go to sleep. To achieve that put hw.acpi.lid_switch_state=S3 into the /etc/sysctl.conf file. No matter if you put your hardware to sleep by command or by closing the lid, your laptop will resume after opening the lid. Of course if you haven't closed the lid after the zzz(8) command you will either have to close and open the lid or push the power button to resume. You may also suspend/resume desktops or even your backup server if it has its purpose. It's not limited to laptops only.
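
For example – a minimal sketch of that /etc/sysctl.conf entry (the same value can also be set at runtime with sysctl(8)):

% grep lid_switch /etc/sysctl.conf
hw.acpi.lid_switch_state=S3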

There are also dedicated kernel modules for various vendor ACPI subsystems. Here they are:

  • /boot/kernel/acpi_asus_wmi.ko
  • /boot/kernel/acpi_asus.ko
  • /boot/kernel/acpi_dock.ko
  • /boot/kernel/acpi_fujitsu.ko
  • /boot/kernel/acpi_hp.ko
  • /boot/kernel/acpi_ibm.ko
  • /boot/kernel/acpi_panasonic.ko
  • /boot/kernel/acpi_sony.ko
  • /boot/kernel/acpi_toshiba.ko
  • /boot/kernel/acpi_video.ko
  • /boot/kernel/acpi_wmi.ko

For example if you have an IBM/Lenovo ThinkPad then you will use the acpi_ibm.ko kernel module.

# kldload acpi_ibm
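
To have it loaded automatically at boot you can also add it to the kld_list variable in the /etc/rc.conf file – a small example (append acpi_ibm to your existing kld_list if you already have one):

kld_list="acpi_ibm"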

After loading each module you will get new sysctl(8) values for your use, for example related to fan speed, keyboard backlight or screen brightness. Below is the new dev.acpi_ibm section in sysctl(8) after loading the acpi_ibm(4) kernel module.

% sysctl dev.acpi_ibm
dev.acpi_ibm.0.handlerevents: NONE
dev.acpi_ibm.0.mic_led: 0
dev.acpi_ibm.0.fan: 0
dev.acpi_ibm.0.fan_level: 0
dev.acpi_ibm.0.fan_speed: 0
dev.acpi_ibm.0.wlan: 1
dev.acpi_ibm.0.bluetooth: 0
dev.acpi_ibm.0.thinklight: 0
dev.acpi_ibm.0.mute: 0
dev.acpi_ibm.0.volume: 0
dev.acpi_ibm.0.lcd_brightness: 0
dev.acpi_ibm.0.hotkey: 1425
dev.acpi_ibm.0.eventmask: 134217727
dev.acpi_ibm.0.events: 1
dev.acpi_ibm.0.availmask: 134217727
dev.acpi_ibm.0.initialmask: 2060
dev.acpi_ibm.0.%parent: acpi0
dev.acpi_ibm.0.%pnpinfo: _HID=LEN0068 _UID=0
dev.acpi_ibm.0.%location: handle=\_SB_.PCI0.LPC_.EC__.HKEY
dev.acpi_ibm.0.%driver: acpi_ibm
dev.acpi_ibm.0.%desc: IBM ThinkPad ACPI Extras
dev.acpi_ibm.%parent: 

Here are descriptions of more interesting ones.

This one controls the LED light on the microphone mute button.
dev.acpi_ibm.0.mic_led

Select if you want to manage the CPU fan yourself (0) or leave it to the manufacturer defaults (1).
dev.acpi_ibm.0.fan

If manual CPU fan control is enabled, this sets its speed.
dev.acpi_ibm.0.fan_level

This one will tell you how fast the CPU fan is spinning (in RPMs).
dev.acpi_ibm.0.fan_speed

Enable/disable WiFi (if it's enabled in the BIOS).
dev.acpi_ibm.0.wlan

Enable/disable Bluetooth (if it's enabled in the BIOS).
dev.acpi_ibm.0.bluetooth

Enable/disable the ThinkLight.
dev.acpi_ibm.0.thinklight

Mute/unmute the speakers.
dev.acpi_ibm.0.mute

Speaker volume.
dev.acpi_ibm.0.volume

Screen brightness.
dev.acpi_ibm.0.lcd_brightness

In most cases it is not needed to use them as you will probably just use the vendor-defined keyboard shortcuts (probably with the Fn key) or vendor-specific dedicated buttons. Sometimes you want to create/use your own setup or need custom keyboard shortcuts, or you want to control the fan speed depending on the CPU temperature in another way than your vendor predefined it. This is when these dedicated ACPI kernel modules are most useful.

For example I recently thought that my CPU fan seemed to be a little louder than it should be so I created a custom cron(8)-based acpi-thinkpad-fan.sh script to use lower and quieter fan speeds when the CPU temperature is low enough.

I will post it here. Maybe you will find it useful for your purposes. To describe it shortly, it disables the fan when the CPU temperature is below 50 (C) degrees, sets it to level '1' if it's between 50 (C) and 60 (C) degrees, and sets it to level '3' when the temperature reaches 60 (C) degrees or more.

#! /bin/sh

if ! kldstat | grep -q acpi_ibm.ko
then
  doas kldload acpi_ibm
fi

doas sysctl dev.acpi_ibm.0.fan=0 1> /dev/null 

TEMP=$( sysctl -n hw.acpi.thermal.tz0.temperature | awk -F'.' '{print $1}' )

if [ ${TEMP} -lt 50 ]
then
  doas sysctl dev.acpi_ibm.0.fan_level=0 1> /dev/null
  exit 0
fi

if [ ${TEMP} -lt 60 ]
then
  doas sysctl dev.acpi_ibm.0.fan_level=1 1> /dev/null
  exit 0
fi

if [ ${TEMP} -ge 60 ]
then
  doas sysctl dev.acpi_ibm.0.fan_level=3 1> /dev/null
  exit 0
fi

… and here is its crontab(5) entry:

% crontab -l
# ACPI/IBM/FAN
* * * * * ~/scripts/acpi-thinkpad-fan.sh

Network Interfaces

There is also an ifconfig(8) option to save power if a driver supports such a feature; it's called powersave and it is used like this.

# ifconfig wlan0 powersave
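
To make that setting persistent you can – for example – add powersave to the interface options in the /etc/rc.conf file; a small sketch assuming a wlan0 interface on the iwn0 device:

wlans_iwn0="wlan0"
ifconfig_wlan0="WPA powersave DHCP"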

I use it in my network.sh network management script described broadly in the FreeBSD Network Management with network.sh article.

Vendor Tools

There are also vendor tools available on FreeBSD, like powermon(8) for example. Remember that it requires the cpuctl(4) kernel module to work.

# pkg install powermon
# kldload cpuctl
# powermon
                  Intel(R) Core(TM) i5-2520M CPU @ 2.50GHz
                      (Arch: Sandy Bridge, Limit: 44W)



   5.11W [=======>                                                           ]



 Package:           Uncore:             x86 Cores:          GPU:
 Current: 5.11W     Current: 3.17W      Current: 1.73W      Current: 0.21W
 Total: 98.33J      Total: 60.86J       Total: 33.49J       Total: 3.98J

DTrace

The dynamic tracing framework that, like ZFS, found its way from Solaris/Illumos to FreeBSD may also be a useful weapon in the battle for more time on your battery.

First add the dtrace-toolkit package.

# pkg install dtrace-toolkit

Your system stops saving energy or wakes the CPU up because something needs to be run/done. To check what is run on your system you mostly use the ps(1) or top(1) utilities but they will not show you what exactly is being started or how often something is being run. This is where DTrace comes to help.

We will use the /usr/local/share/dtrace-toolkit/execsnoop script from the dtrace-toolkit package. It will print EVERY COMMAND that is being run with all its arguments. Be advised that it will remain silent when no commands are run.

Here is example output for my dzen2 toolbar update.

# /usr/local/share/dtrace-toolkit/execsnoop 
  UID    PID   PPID ARGS
 1000  97748  97509 /usr/local/bin/zsh -c ~/scripts/dzen2-update.sh > ~/.dzen2-fifo
 1000  97748      1 /bin/sh /home/vermaden/scripts/dzen2-update.sh
 1000  99157  97748 sysctl -n kern.smp.cpus
 1000    311  97748 ps ax -o %cpu,rss,command -c
 1000   3118   1521 awk -v SMP=200 /\ idle$/ {printf("%.1f%%",SMP-$1)}
 1000   4462  97748 date +%Y/%m/%d/%a/%H:%M
 1000   4801  97748 sysctl -n dev.cpu.0.freq
 1000   6009  97748 sysctl -n hw.acpi.thermal.tz0.temperature
 1000   6728  97748 sysctl -n vm.stats.vm.v_inactive_count
 1000   7043  97748 sysctl -n vm.stats.vm.v_free_count
 1000   7482  97748 sysctl -n vm.stats.vm.v_cache_count
 1000  10363   8568 bc -l
 1000  10863  10363 dc -x
 1000  13143   7773 grep --color -q ^\.
 1000  13798  97748 /bin/sh /home/vermaden/scripts/__conky_if_ip.sh
 1000  15089  14235 ifconfig -u
 1000  16439  14235 grep -v 127.0.0.1
 1000  17738  14235 grep -c inet 
 1000  19069  18612 ifconfig -l -u
 1000  19927  18612 sed s/lo0//g
 1000  20772  13798 ifconfig wlan0
 1000  23388  21410 grep ssid
 1000  24588  13798 grep -q "
 1000  25965  25282 awk /ssid/ {print $2}
 1000  27917  27217 awk /inet / {print $2}
 1000  29941  97748 /bin/sh /home/vermaden/scripts/__conky_if_gw.sh
 1000  32808  31412 route -n -4 -v get default
 1000  34012  31412 awk END{print $2}
 1000  34895  97748 /bin/sh /home/vermaden/scripts/__conky_if_dns.sh
 1000  36118  34895 awk /^nameserver/ {print $2; exit} /etc/resolv.conf
 1000  37628  97748 /bin/sh /home/vermaden/scripts/__conky_if_ping.sh dzen2
 1000  38829  37628 ping -c 1 -s 0 -t 1 -q 9.9.9.9
 1000  42079  41566 mixer -s vol
 1000  42177  41566 awk -F : {printf("%s",$2)}
 1000  44434  43254 zfs list -H -d 0 -o name,avail
 1000  45866  43254 awk {printf("%s/%s ",$1,$2)}
 1000  47004  97748 /bin/sh /home/vermaden/scripts/__conky_battery_separate.sh dzen2
 1000  48282  47004 sysctl -n hw.acpi.battery.units
 1000  49494  47004 sysctl -n hw.acpi.battery.life
 1000  49948  47004 sysctl -n hw.acpi.acline
 1000  52073  51441 acpiconf -i 0
 1000  53055  51441 awk /^State:/ {print $2}
 1000  53981  53186 acpiconf -i 0
 1000  55354  53186 awk /^Remaining capacity:/ {print $3}
 1000  55968  55631 acpiconf -i 1
 1000  57187  55631 awk /^State:/ {print $2}
 1000  58405  57471 acpiconf -i 1
 1000  59201  57471 awk /^Remaining capacity:/ {print $3}
 1000  60961  59252 bsdgrep -v -E (COMMAND|idle)$
 1000  63534  59252 head -3
 1000  62194  59252 sort -r -n
 1000  64629  59252 awk {printf("%s/%d%%/%.1fGB ",$3,$1,$2/1024/1024)}
 1000  64634  93198 tail -1 /home/vermaden/.dzen2-fifo

Lots of processes just to update the information at the top of the screen. That is why I refresh the dzen2 information only every 5 minutes and if I want exact information and system status for the current moment I just 'click' on the dzen2 bar to run all these commands and refresh it.

This way, using DTrace, you will know if something unwanted is stealing your precious battery time. You may find such a dzen2 config in my FreeBSD Desktop – Part 13 – Configuration – Dzen2 article.

Other

ZFS

By default ZFS will commit a transaction group every 5 seconds and that is a good default setting for the vfs.zfs.txg.timeout parameter. You may want to increase it a little if needed – to 10 for example. I mention that parameter mostly because lots of guides advise setting it to 1 for various performance reasons, but keep in mind that setting it to 1 will prevent your disk (and CPU) from going to sleep, thus draining more battery life.

If you want to mess with the vfs.zfs.txg.timeout value set it in the /boot/loader.conf file.
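
For example – following the /boot/loader.conf style used earlier in this article:

# ZFS TRANSACTION GROUP COMMIT INTERVAL (SECONDS)
  vfs.zfs.txg.timeout=10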

Applications

The applications you use are also crucial to get more time on battery. For example Thunar uses less CPU time than Caja or Nautilus. The Geany text editor uses fewer CPU resources and less memory than the SciTE or Gedit editors; even GVim takes more resources. Not to mention that a custom Openbox/Fluxbox/${YOUR_FAVORITE_WM} window manager based setup will consume a lot less CPU time than an entire Gnome or Mate environment.

Hardware

It's sometimes possible to literally buy more battery time. For example when you want to buy a new SSD for your laptop, pick not the fastest one but the most power-efficient one. You will probably not feel the performance difference anyway but you will appreciate the extra battery time.

Most RAM modules run at 1.5V but there is a chance that your laptop supports low power DDR modules running at 1.35V, thus increasing your battery time. Also keep in mind that each RAM stick uses about 0.5-1.0W of power, so a single 8 GB RAM stick will provide you more battery time than the same 8 GB of memory split into two 4 GB RAM modules. This also has a performance drawback because with a single RAM module you will not be able to use dual channel technology, so you will limit your RAM speed. Some laptops have even 4 RAM slots (like the ThinkPad W520 for example) so without losing anything you should use two 8 GB RAM sticks instead of four 4 GB RAM sticks for longer battery life.

It is sometimes possible to swap your DVD drive for an internal secondary battery. Examples of such laptops are the Dell Latitude D630, ThinkPad T420s or ThinkPad T500/W500. Sometimes vendors offer an entire slice battery that attaches to the bottom of your laptop, like the slice battery for the ThinkPad X220 or T420/T520/W520 laptops or for the 1st generation ThinkPad X1 laptop.

Hope that this information will help you squeeze some battery time (or at least save some power) on FreeBSD 🙂

UPDATE 1 – Graphics Card Power Saving

If you have the graphics/drm-kmod package installed you probably use the latest i915kms.ko kernel module.

To set maximum power management for integrated Intel graphics cards put these into the /boot/loader.conf file.

# INTEL DRM WITH graphics/drm-kmod PACKAGE (NEW)
# SKIP UNNECESSARY MODE SETS AT BOOT TIME 
  compat.linuxkpi.fastboot=1
# USE SEMAPHORES FOR INTER RING SYNC
  compat.linuxkpi.semaphores=1
# ENABLE POWER SAVING RENDER C-STATE 6
  compat.linuxkpi.enable_rc6=7
# ENABLE POWER SAVING DISPLAY C-STATES
  compat.linuxkpi.enable_dc=2
# ENABLE FRAME BUFFER COMPRESSION FOR POWER SAVINGS
  compat.linuxkpi.enable_fbc=1

In the past the settings below were used but they are not present anymore.

# INTEL DRM WITH graphics/drm-kmod PACKAGE (OLD)
  drm.i915.enable_rc6=7
  drm.i915.semaphores=1
  drm.i915.intel_iommu_enabled=1

UPDATE 2 – AMD CPU Temperatures

While the coretemp(4) kernel module is used for Intel CPUs the amdtemp(4) kernel module will provide additional temperature information for AMD CPUs.

UPDATE 3 – Suspend/Resume Tips

The biggest enemies of the suspend/resume subsystem are bugs in the BIOS/UEFI firmware. Sometimes disabling Bluetooth helps – that is the case for the Lenovo ThinkPad T420s for example. On the Lenovo ThinkPad X240 it is disabling the TPM (Trusted Platform Module) that helps.

EOF


FreeBSD Desktop – Part 2.1 – Install FreeBSD 12

This article is an update/rewrite of the already published FreeBSD Desktop – Part 2 – Install. With the upcoming introduction of the FreeBSD 12.0-RELEASE version new possibilities arise when it comes to installation. I already talked about/showed that method in my ZFS Boot Environments Reloaded at NLUUG presentation but to make it a more available and obvious part of my FreeBSD Desktop series I write about it again in a dedicated article.

You may want to check other articles in the FreeBSD Desktop series on the FreeBSD Desktop – Global Page where you will find links to all episodes of the series along with a table of contents for each episode.

Now (in FreeBSD 12.x) it is possible to install FreeBSD on a GELI encrypted root on a ZFS pool without any additional partitions or filesystems. A separate UFS or ZFS boot pool /boot filesystem is no longer needed. And what is even more appealing, such a setup is supported both on UEFI and BIOS (also referred to as Legacy or CSM) systems. Such a setup is also compatible with both the new bectl(8) utility and the old proven beadm(8) tool. It is also nice that to make such a setup you only need to choose the Auto ZFS option from bsdinstall(8) so you will not have to do it by hand. I advise using GPT (BIOS+UEFI) as it will support both system types, so if you are running a BIOS system now and later move the disk to another system that boots with UEFI it will also just work out of the box.
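
If you want to quickly confirm that Boot Environments work on such an installation, a minimal sketch with bectl(8) looks like this (the 'test' name is just an example):

# LIST EXISTING BOOT ENVIRONMENTS
# bectl list
# CREATE AND THEN REMOVE A THROWAWAY BOOT ENVIRONMENT
# bectl create test
# bectl destroy test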

FreeBSD 12.0 is currently at the RC1 stage so we will use that one for the examples of such a setup below. The 12.0-RELEASE is expected to arrive before Christmas if no significant problems or bugs are found on the road to the RC2 and RC3 editions.

For the record here are the FreeBSD 12.0-RC1 Availability information page and the aggregated FreeBSD 12.0-RELEASE Release Notes for the upcoming new major FreeBSD version, though the latter are not yet complete/ready.

I will only show one install process that will work for both UEFI and BIOS systems – the crucial option here is to select GPT (BIOS+UEFI) (which is also the default one). The other option that you need to select is Yes for the Encryption part, and you also need to select the SWAP size. You may as well not use swap and enter ‘0’ here, which means that a SWAP partition will not be created. You can also create a ZFS ZVOL for SWAP on the ZFS pool later or just create a file like /SWAP and enable it as SWAP. No matter which SWAP option you choose, if your system swaps then you are too low on memory and neither of these methods is better or worse then.
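
Here is a minimal sketch of both SWAP approaches, assuming a default zroot pool and 4 GB of swap space (the exact names and sizes are just an illustration):

# SWAP ON A ZFS ZVOL (ACTIVATED AT BOOT BY THE /etc/rc.d/zvol SCRIPT)
# zfs create -V 4G -o org.freebsd:swap=on -o checksum=off zroot/swap
# swapon /dev/zvol/zroot/swap

# SWAP ON A /SWAP FILE VIA md(4)
# dd if=/dev/zero of=/SWAP bs=1m count=4096
# chmod 0600 /SWAP
# echo 'md99 none swap sw,file=/SWAP,late 0 0' >> /etc/fstab
# swapon -aL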

freebsd-install-01.png

freebsd-install-02.png

freebsd-install-03.png

One last thing about the default FreeBSD (no matter if 11.x or 12.x) ZFS dataset/filesystem layout. I showed it on my ZFS Boot Environments/ZFS Boot Environments Reloaded presentations but without any text commentary as I talked about it live.

By default both the /var and /usr filesystems are part of the Boot Environment. They are protected and snapshotted during the beadm create newbe process (or by bectl(8) as well). It appears that /var and /usr are separate datasets when you type the zfs list command, as shown on the slide below.

zroot-layout-01.png

… but when you check the canmount parameter for all ZFS datasets, it becomes obvious that /usr and /var are ’empty’ datasets (not mounted).

zroot-layout-02.png

… and also confirmation from the df(1) tool.

zroot-layout-03.png

I asked FreeBSD Developers what the reason for such a construct is and it is for mountpoint inheritance purposes. For example when zroot/usr has mountpoint set to /usr, then when you create the zroot/usr/local dataset it will automatically get /usr/local for the mountpoint parameter by inheritance. At first sight it may be misleading (I also got caught) but it makes sense when you think about it.
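
You can check both things yourself with zfs(8); a quick sketch, following the default zroot layout:

# SHOW WHICH DATASETS ARE ACTUALLY MOUNTABLE AND WHERE
% zfs get -r canmount,mountpoint zroot

# A NEW CHILD DATASET GETS /usr/local PURELY BY MOUNTPOINT INHERITANCE
# zfs create zroot/usr/local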

The only filesystems that are NOT included for the Boot Environment protection are these:

  • /usr/home
  • /usr/ports
  • /usr/src
  • /var/audit
  • /var/crash
  • /var/log
  • /var/mail
  • /var/tmp

While in most cases it is not needed to include these in the Boot Environment protection, if you want to also protect them type these two commands to move all the /usr/* and /var/* datasets/filesystems into the Boot Environment pool/ROOT/dataset. It will work on a running system without the need for a reboot, just make sure you use the -u flag.

# zfs rename -u zroot/usr zroot/ROOT/default/usr
# zfs rename -u zroot/var zroot/ROOT/default/var

Now grab that FreeBSD ISO and install it in the best possible way available to date 🙂

You will probably want to get the amd64 version which is suitable for both 64-bit AMD and Intel systems.

EOF


Silent Fanless FreeBSD Server – DIY Backup

I already wrote about this topic once in the Silent Fanless FreeBSD Desktop/Server article. To my pleasant surprise BSD NOW Episode 253: Silence of the Fans featured my article, for which I am very grateful. Today I would like to show another practical example of such a setup, with a more hands-on approach along with real power usage measurements taken with a power meter. I also got the more power efficient ASRock N3150B-ITX motherboard with only 6W TDP which includes a 4-core Celeron N3150 CPU, and a nice small Supermicro SC101i Mini ITX case. Keep in mind that ASRock also made the very similar N3150-ITX motherboard (no ‘B’ in the model name) with different ports/connectors that may suit your needs better.

You may also check the follow up Silent Fanless FreeBSD Server – Redundant Backup article.

Build

Here is how the Supermicro SC101i case looks with the ASRock N3150B-ITX motherboard installed.

silent-backup-case-external.jpg

silent-backup-case-back.jpg

One thing that surprised me very much was the hard disk cost. The internal Seagate 4TB ST4000LM024 2.5 SATA drive costs about $180-190, but the same disk sold as the Maxtor M3 4TB 2.5 disk in an external case under the Maxtor brand (which is owned by Seagate anyway) with a USB 3.0 port costs half of that – about $90-100. At least in the Europe/Poland location.

I think you already know where I am going with this. I will use an external Maxtor M3 4TB 2.5 drive and connect it via the USB 3.0 port in this setup. While SATA III provides a theoretical throughput of 6Gbps, USB 3.0 provides a 5Gbps theoretical throughput. The difference can be important for low latency high throughput SSD drives that approach 580MB/s speeds but not for traditional rotational disks spinning gently at 5400RPM.

The maximum performance I was able to squeeze from this Maxtor M3 4TB 2.5 USB 3.0 drive was a 90MB/s write speed and a 120MB/s read speed using the pv(1) tool, and that was at the beginning of the disk. These speeds drop to about 70MB/s and 90MB/s at the end of the disk, respectively for write and read operations. We are not even approaching the SATA I standard here, which tops out at 1.5Gbps. Thus USB 3.0 will not make a difference, or certainly not a significant one, for such storage.
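
For the record, such raw sequential numbers can be obtained roughly like this (da1 is only an assumed device name for the USB disk here and the write test destroys all data on it):

# SEQUENTIAL READ TEST (SAFE)
# pv < /dev/da1 > /dev/null

# SEQUENTIAL WRITE TEST (DESTROYS ALL DATA ON da1)
# pv < /dev/zero > /dev/da1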

At first I wanted to drill a hole in the steel plate at the motherboard end of the case (somewhere beside the back ports) to route the USB cable outside the case and attach it to one of the USB 3.0 ports at the back of the motherboard, but fortunately I got a better idea. This motherboard has a connector for internal USB 3.0 (the so-called front panel USB on the case) so I bought the Akyga AK-CA-57 front panel cable with a USB 3.0 port and connected everything inside the case.

This is the Akyga AK-CA-57 USB 3.0 cable.

silent-backup-usb-akyga-cable-AK-CA-57.jpg

If I were going to install two USB 3.0 disks using this method I would use one of these cables instead:

The only problem can be a more physical one – will it blend, will it fit? Fortunately I was able to find a way to fit it in the case and there is even space for a second disk. As this will be my offsite backup replacement, which is only the 3rd stage/offsite backup, I do not need to create redundant mirror/RAID1 protection but it’s definitely possible with two Maxtor M3 4TB 2.5 USB 3.0 drives.

The opened Supermicro SC101i case with the ASRock N3150B-ITX motherboard inside and the Pico PSU attached looks like this.

silent-backup-mobo-case.jpg

With the Akyga AK-CA-57 USB 3.0 cable attached things get a little narrow, but with proper cable layout you will still be able to fit another internal 2.5 SATA disk or external 2.5 USB 3.0 disk.

silent-backup-mobo-case-blue.jpg

I attached the Akyga AK-CA-57 cable to this USB 3.0 connector on the motherboard.

silent-backup-mobo-case-usb.jpg

The case with the Maxtor M3 4TB disk. The disk placement required some small modifications.

silent-backup-mobo-case-blue-disk.jpg

I created custom disk holders using steel plates I got from a window mosquito net set for my home, but you should be able to get something similar in any hardware shop. I modified them a little with pliers.

silent-backup-handles

I also ‘silenced’ the disk vibrations with felt stickers.

silent-backup-silence.jpg

The silenced disk in the Supermicro SC101i case.

silent-backup-mobo-case-blue-disk-silence.jpg

Ancestor

Before this setup I used a Raspberry Pi 2B with an external Western Digital 2TB 2.5 USB 3.0 disk, but the storage space requirements became larger so I needed to increase that. It was of course with GELI encryption and ZFS with LZ4 compression enabled on top. The four humble ARM32 cores and soldered 1GB of RAM were able to squeeze a whopping 5MB/s read/write experience from this ZFS/GELI setup but that was not hurting me as I used rsync(1) for differential backups and the Internet connection to that box was limited to about 1.5MB/s. I would still use that setup but it just won’t boot with the larger Maxtor M3 4TB disk because it requires more power and I already used a stronger 5V 3.1A charger than the 5V 2.0A suggested by the vendor. Even the safe_mode_gpio=4 and max_usb_current=1 options in /boot/msdos/config.txt did not help.

Cost

The complete setup price tops at $220 total. Here are the parts used.

PRICE  COMPONENT
  $59  CPU/Motherboard ASRock N3150B-ITX Mini-ITX
  $14  RAM Crucial 4GB DDR3L 1.35V
  $13  PSU 12V 7.5A 90W Pico (internal)
   $2  PSU 12V 2.5A 30W Leader Electronics (external)
  $29  Supermicro SC101i (used)
   $3  Akyga AK-CA-57 USB 3.0 Cable
   $3  SanDisk Fit 16GB USB 2.0 Drive (system)
  $95  Maxtor M3 4TB 2.5 USB 3.0 Drive (data)
 $220  TOTAL

PSU

In the earlier Silent Fanless FreeBSD Desktop/Server article I used a quite large 90W PSU from FSP Group. Of the PSUs that I owned only ThinkPad W520/W530 bricks can compete in size with this beast. As this motherboard will use very little power (details below) it requires a much smaller PSU. As the FSP Group PSU has an IEC C14 slot it also requires an additional IEC C13 power cable which makes it an even bigger solution. The new 12V 2.5A 30W one is very compact and also costs a fraction of the 90W FSP Group gojira.

New Leader Electronics PSU label.

silent-backup-psu-ext-label.jpg

Below you can see the comparison for yourself.

silent-backup-psu-compare

I also got a cheaper and less powerful Pico PSU which tops out at 12V 7.5A 90W of power.

silent-backup-psu-pico-12V-90W.jpg

Power Consumption

This is where it gets really interesting. I measured the power consumption with a power meter.

silent-backup-power-meter.jpg

Idle

When this box is booted without any media attached it uses only 7.5W of power while idling. While the system was idle with the SanDisk 16GB USB 2.0 drive (on which FreeBSD was installed) it used about 8.0W of power. When booted with the Maxtor M3 4TB disk inside and the SanDisk 16GB USB 2.0 drive attached it ran idle at about 8.5W of power.

Load

As I do not need full CPU speed I limited the CPU speed in the powerd(8) options to 1.2GHz. With this limit set, the fully loaded system (all 4 cores busy at 100%, two dd(1) processes reading both the boot SanDisk 16GB drive and the Maxtor M3 4TB disk, the GELI enabled ZFS pool running a scrub operation and two additional find(1) processes walking both disks) would not pass the 13.9W barrier. Without the CPU limitation (that means with Intel Turbo Boost enabled) the system used 16.0W of power at most.
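
The load itself was nothing fancy; roughly something like this, assuming da0 is the boot drive, da1 is the Maxtor disk and zroot is the GELI backed pool (all of these names are just examples):

# dd if=/dev/da0 of=/dev/null bs=1m &
# dd if=/dev/da1 of=/dev/null bs=1m &
# zpool scrub zroot
# find / -type f > /dev/null &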

Summary of power usage for this box.

 POWER  TYPE  CONFIGURATION
 7.5 W  IDLE  System
 8.0 W  IDLE  System + SanDisk 16GB drive
 8.5 W  IDLE  System + SanDisk 16GB drive + Maxtor M3 4TB drive + CPU 1.2 GHz limit
 8.5 W  IDLE  System + SanDisk 16GB drive + Maxtor M3 4TB drive
13.9 W  LOAD  System + SanDisk 16GB drive + Maxtor M3 4TB drive + CPU 1.2 GHz limit
16.0 W  LOAD  System + SanDisk 16GB drive + Maxtor M3 4TB drive

For comparison the Raspberry Pi 2B with a 16GB MicroSD card attached used only 1.5W but we all know how slow it is. When used with the Western Digital 2TB 2.5 USB 3.0 drive it used about 2.2W in the idle state.

Configuration for Low Power Consumption

Below are the FreeBSD configuration files used on this box to lower the power consumption.

The /etc/sysctl.conf file.

# ANNOYING THINGS
  vfs.usermount=1
  kern.coredump=0
  hw.syscons.bell=0
  kern.vt.enable_bell=0

# LIMIT ZFS ARC EFFICIENTLY
  kern.maxvnodes=32768

# ALLOW UPGRADES IN JAILS
  security.jail.chflags_allowed=1

# ALLOW RAW SOCKETS IN JAILS
  security.jail.param.allow.raw_sockets=1
  security.jail.allow_raw_sockets=1

# RANDOM PID
  kern.randompid=12345

# PERFORMANCE/ALL SHARED MEMORY SEGMENTS WILL BE MAPPED TO UNPAGEABLE RAM 
  kern.ipc.shm_use_phys=1

# MEMORY OVERCOMMIT SEE tuning(7)
  vm.overcommit=2

# NETWORK/DO NOT SEND RST ON SEGMENTS TO CLOSED PORTS
  net.inet.tcp.blackhole=2

# NETWORK/DO NOT SEND PORT UNREACHABLES FOR REFUSED CONNECTS
  net.inet.udp.blackhole=1

# NETWORK/ENABLE SCTP BLACKHOLING blackhole(4) FOR MORE DETAILS
  net.inet.sctp.blackhole=1

# NETWORK/MAX SIZE OF AUTOMATIC RECEIVE BUFFER (2097152) [4x]
  net.inet.tcp.recvbuf_max=8388608

# NETWORK/MAX SIZE OF AUTOMATIC SEND BUFFER (2097152) [4x]
  net.inet.tcp.sendbuf_max=8388608

# NETWORK/MAXIMUM SOCKET BUFFER SIZE (5242880) [3.2x]
  kern.ipc.maxsockbuf=16777216

# NETWORK/MAXIMUM LISTEN SOCKET PENDING CONNECTION ACCEPT QUEUE SIZE (128) [8x]
  kern.ipc.soacceptqueue=1024

# NETWORK/DEFAULT tcp MAXIMUM SEGMENT SIZE (536) [2.7x]
  net.inet.tcp.mssdflt=1460

# NETWORK/MINIMUM TCP MAXIMUM SEGMENT SIZE (216) [6x]
  net.inet.tcp.minmss=1300

# NETWORK/LIMIT ON SYN/ACK RETRANSMISSIONS (3)
  net.inet.tcp.syncache.rexmtlimit=0

# NETWORK/USE TCP SYN COOKIES IF THE SYNCACHE OVERFLOWS (1)
  net.inet.tcp.syncookies=0

# NETWORK/ENABLE TCP SEGMENTATION OFFLOAD (1)
  net.inet.tcp.tso=0

# NETWORK/ENABLE IP OPTIONS PROCESSING ([LS]SRR, RR, TS) (1)
  net.inet.ip.process_options=0

# NETWORK/ASSIGN RANDOM ip_id VALUES (0)
  net.inet.ip.random_id=1

# NETWORK/ENABLE SENDING IP REDIRECTS (1)
  net.inet.ip.redirect=0

# NETWORK/IGNORE ICMP REDIRECTS (0)
  net.inet.icmp.drop_redirect=1

# NETWORK/ASSUME SO_KEEPALIVE ON ALL TCP CONNECTIONS (1)
  net.inet.tcp.always_keepalive=0

# NETWORK/DROP TCP PACKETS WITH SYN+FIN SET (0)
  net.inet.tcp.drop_synfin=1

# NETWORK/RECYCLE CLOSED FIN_WAIT_2 CONNECTIONS FASTER (0)
  net.inet.tcp.fast_finwait2_recycle=1

# NETWORK/CERTAIN ICMP UNREACHABLE MESSAGES MAY ABORT CONNECTIONS IN SYN_SENT (1)
  net.inet.tcp.icmp_may_rst=0

# NETWORK/MAXIMUM SEGMENT LIFETIME (30000) [0.27x]
  net.inet.tcp.msl=8192

# NETWORK/ENABLE PATH MTU DISCOVERY (1)
  net.inet.tcp.path_mtu_discovery=0

# NETWORK/EXPIRE TIME OF TCP HOSTCACHE ENTRIES (3600) [2x]
  net.inet.tcp.hostcache.expire=7200

# NETWORK/TIME BEFORE DELAYED ACK IS SENT (100) [0.2x]
  net.inet.tcp.delacktime=20

The /boot/loader.conf file.

# BOOT OPTIONS
  autoboot_delay=1
  boot_mute=YES

# MODULES FOR BOOT
  zfs_load=YES

# DISABLE HYPER THREADING
  machdep.hyperthreading_allowed=0

# REDUCE NUMBER OF SOUND GENERATED INTERRUPTS
  hw.snd.latency=7

# RACCT/RCTL RESOURCE LIMITS
  kern.racct.enable=1

# PIPE KVA LIMIT | 320 MB
  kern.ipc.maxpipekva=335544320

# NUMBER OF SEGMENTS PER PROCESS
  kern.ipc.shmseg=1024

# LARGE PAGE MAPPINGS
  vm.pmap.pg_ps_enabled=1

# SHARED MEMORY
  kern.ipc.shmmni=1024
  kern.ipc.shmseg=1024

# ZFS TUNING
  vfs.zfs.prefetch_disable=1
  vfs.zfs.cache_flush_disable=1
  vfs.zfs.vdev.cache.size=16M
  vfs.zfs.arc_min=32M
  vfs.zfs.arc_max=128M
  vfs.zfs.txg.timeout=1

# NETWORK MAX SEND QUEUE SIZE
  net.link.ifqmaxlen=2048

# POWER OFF DEVICES WITHOUT ATTACHED DRIVER
  hw.pci.do_power_nodriver=3

# AHCI POWER MANAGEMENT FOR EVERY USED CHANNEL (ahcich 0-7)
  hint.ahcich.0.pm_level=5
  hint.ahcich.1.pm_level=5
  hint.ahcich.2.pm_level=5
  hint.ahcich.3.pm_level=5
  hint.ahcich.4.pm_level=5
  hint.ahcich.5.pm_level=5
  hint.ahcich.6.pm_level=5
  hint.ahcich.7.pm_level=5

# GELI THREADS
  kern.geom.eli.threads=2
  kern.geom.eli.batch=1

The /etc/rc.conf file.

# NETWORK
  hostname=offsite.local
  background_dhclient=YES
  extra_netfs_types=NFS
  defaultroute_delay=3
  defaultroute_carrier_delay=3

# MODULES/COMMON/BASE
  kld_list="${kld_list} aesni geom_eli"
  kld_list="${kld_list} fuse coretemp sem cpuctl ichsmb cc_htcp"
  kld_list="${kld_list} libiconv cd9660_iconv msdosfs_iconv udf_iconv"

# POWER
  performance_cx_lowest=C1
  economy_cx_lowest=Cmax
  powerd_enable=YES
  powerd_flags="-n adaptive -a hiadaptive -b adaptive -m 400 -M 1200"

# DAEMONS | yes
  zfs_enable=YES
  nfs_client_enable=YES
  syslogd_flags='-s -s'
  sshd_enable=YES

# DAEMONS | no
  sendmail_enable=NONE
  sendmail_submit_enable=NO
  sendmail_outbound_enable=NO
  sendmail_msp_queue_enable=NO

# FS
  fsck_y_enable=YES
  clear_tmp_enable=YES
  clear_tmp_X=YES
  growfs_enable=YES

# OTHER
  keyrate=fast
  font8x14=vgarom-8x14
  virecover_enable=NO
  update_motd=NO
  devfs_system_ruleset=desktop
  hostid_enable=NO

USB Boot Drive

I was not sure if I should use a USB 2.0 drive or a USB 3.0 drive for the FreeBSD system so I got both versions from SanDisk and tested their performance with the pv(1) and diskinfo(8) tools. The pv(1) utility had the options shown below enabled and for diskinfo(8) the -c and -i parameters were used.

% which pv
pv: aliased to pv -t -r -a -b -W -B 1048576
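
For completeness, the diskinfo(8) part of the test looked roughly like this (da0 being the USB drive under test is an assumption here):

# MEASURE COMMAND OVERHEAD (-c) AND I/O OPERATIONS PER SECOND (-i)
# diskinfo -c -i /dev/da0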

The dmesg(8) information for the SanDisk Fit USB 2.0 16GB drive.

# dmesg | tail -6
da0 at umass-sim0 bus 0 scbus3 target 0 lun 0
da0:  Removable Direct Access SPC-4 SCSI device
da0: Serial Number 4C530001100609104091
da0: 40.000MB/s transfers
da0: 15060MB (30842880 512 byte sectors)
da0: quirks=0x2

The dmesg(8) information for the SanDisk Fit USB 3.0 16GB drive.

# dmesg | tail -6
da0 at umass-sim0 bus 0 scbus3 target 0 lun 0
da0:  Removable Direct Access SPC-4 SCSI device
da0: Serial Number 4C530 001070202100093
da0: 40.000MB/s transfers
da0: 14663MB (30031250 512 byte sectors)
da0: quirks=0x2

There is also a noticeable size difference as the USB 2.0 version has an additional 400 MB of space!

By the way … the SanDisk Fit USB 3.0 16GB came with this sticker inside the box – a serial number for the RescuePRO Deluxe software – which I will never use. Not because it’s bad or something but because I have no such needs. You may take it … unless of course someone else has already taken it 🙂

silent-backup-license.jpg

Below are the results of the benchmarks. I tested both drives in both USB 2.0 and USB 3.0 ports.


                   DRIVE  USB  pv/READ  pv/WRITE  diskinfo/OVERHEAD  diskinfo/IOPS
SanDisk Fit USB 2.0 16GB  2.0   29MB/s     5MB/s   0.712msec/sector           2521
SanDisk Fit USB 2.0 16GB  3.0   33MB/s     5MB/s   0.799msec/sector           2441
SanDisk Fit USB 3.0 16GB  2.0   35MB/s     9MB/s   0.618msec/sector           1920
SanDisk Fit USB 3.0 16GB  3.0   91MB/s    11MB/s   0.567msec/sector           1588

What is also interesting is that while the USB 2.0 version has lower throughput it has more IOPS than the newer USB 3.0 incarnation of the SanDisk Fit drive. I also did another, more real life test. I checked how long it would take to boot the FreeBSD system installed on each of them, from the loader(8) screen to the login: prompt. The difference is 5 seconds. Details are shown below.

 TIME  DRIVE
  28s  SanDisk Fit USB 3.0 16GB
  33s  SanDisk Fit USB 2.0 16GB

With such a small ~15% difference I will use the SanDisk Fit USB 2.0 16GB as it sticks out a little less from the slot, as shown below.

silent-backup-usb-drives.jpg

Cloud Storage Prices Comparison

Tarsnap – “online backups for the truly paranoid” – costs $0.25/GB/month. The Tarsnap price is for data transmitted after deduplication and compression but that does not change much here. For my data the compressratio property of the ZFS dataset is at 3% (1.03). When I estimate deduplication savings with the zdb -S pool command I get an additional 1% of savings (1.01). Let’s assume that with both deduplication and compression the savings would reach 5% (1.05). That would lower the Tarsnap price to $0.2375/GB/month.
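
Both numbers are easy to check yourself; a quick sketch, assuming a pool named zroot:

# SHOW THE COMPRESSION RATIO FOR THE WHOLE POOL
% zfs get -o name,value compressratio zroot

# SIMULATE DEDUPLICATION SAVINGS
# zdb -S zroot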

The Backblaze B2 Cloud Storage – storage costs $0.005/GB/month.

Our single 4TB disk solution costs about $230 for, let’s say, 3 years. You can expect disk failure after that period but it may serve you just as well for another 3 years. Now, as we know the cloud storage prices, let’s calculate the price for 4TB of data stored for 3 years in these cloud services.

Self Solution Electricity Cost

We also need to calculate how much energy our self-built solution would consume. Currently 1kWh of power costs about $0.20 in Europe/Poland (rounded up). This means that running a computer with 1000W power usage for 1 hour would cost you $0.20 on the electricity bill. Our solution idles at 8.5W and uses 13.9W when fully loaded. It will be idle most of the time so I will assume that it will use 10W on average here. That works out to $0.002 for a 10W device running for 1 hour.

Below you will also find calculations for 1 day (24x multiplier), 1 year (another 365.25x multiplier) and 3 years (another 3x multiplier).

  COST  TIME
$0.002  1 HOUR
$0.048  1 DAY
$17.53  1 YEAR
$52.60  3 YEARS
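
The table above can be reproduced with a quick awk(1) one-liner (a sketch, using the 10W average and $0.20/kWh assumed earlier):

% awk 'BEGIN {
    w = 10; rate = 0.20           # WATTS AND PRICE PER kWh
    hour = w / 1000 * rate        # COST OF ONE HOUR
    printf "1 HOUR:  $%.3f\n", hour
    printf "1 DAY:   $%.3f\n", hour * 24
    printf "1 YEAR:  $%.2f\n", hour * 24 * 365.25
    printf "3 YEARS: $%.2f\n", hour * 24 * 365.25 * 3
  }'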

Our total 3 year cost is $282.60 for building and then running the system non-stop. We can also implement features like Wake On LAN to limit that power usage even more.

Here are the prices of these cloud storage service providers.


PROVIDER     PRICE  DATA  TIME
Tarsnap    $0.2375   1GB  1 Month
Backblaze  $0.0050   1GB  1 Month

The price for 1 month of keeping 4TB of data on these providers looks as follows.


PROVIDER   PRICE  DATA  TIME
Tarsnap     $973   4TB  1 Month
Backblaze    $20   4TB  1 Month

For just 1 month Tarsnap is 4 TIMES more expensive than keeping the backup on your own computer with a 4TB disk. The Backblaze service is at 1/10 of that cost which is still reasonable.

Let’s compare prices for 3 years of 4TB storage.


PROVIDER    PRICE  DATA  TIME
Tarsnap    $35021   4TB  3 Years
Backblaze    $737   4TB  3 Years

After 3 years the Backblaze solution is about 2.5 TIMES more expensive than our personal setup, but if you really do not want to build your own solution the difference over 3 years is not that big. Tarsnap is out of bounds here, being more than 120 TIMES more expensive than the self hosted solution. Remember that I also did not include the costs of transferring the data into or from the cloud storage. That would make the cloud storage costs even bigger depending on how often you would want to pull/push your data.
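
For reference, the per-provider numbers above can be reproduced the same way (a sketch, assuming that 4 TB equals 4096 GB):

% awk 'BEGIN {
    gb = 4096
    printf "Tarsnap    1 MONTH: $%.0f\n", gb * 0.2375
    printf "Backblaze  1 MONTH: $%.0f\n", gb * 0.005
    printf "Tarsnap    3 YEARS: $%.0f\n", gb * 0.2375 * 36
    printf "Backblaze  3 YEARS: $%.0f\n", gb * 0.005 * 36
  }'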

EOF

IBM TSM (Spectrum Protect) on Veritas Cluster Server

Until today I have mostly shared articles about free and open systems. Now it’s time to share some so called enterprise experience 🙂 Not so long ago I set up an IBM TSM instance as a highly available service on Symantec Veritas Cluster Server.

ibm-tsm-logo.png

If you prefer to use an open and free backup solution then check the Bareos Backup Server on FreeBSD article.

IBM TSM (Tivoli Storage Manager) has been rebranded by IBM into IBM Spectrum Protect, and in a similar period of time Symantec moved Veritas Cluster Server into InfoScale Availability while creating a separate/dedicated Veritas company for this purpose.

The instructions I want to share today are for sure the same for the latest versions of Veritas Cluster Server and its later InfoScale Availability incarnations, and the introduction of the latest IBM Spectrum Protect 8.1 family was mostly related to rebranding/cleaning of the whole Spectrum Protect/TSM modules and additions, so they all now share a common 8.1 label. As these instructions were made for the IBM TSM (Spectrum Protect) 7.1.6 version they should still be very similar for current versions.

This highly available IBM TSM instance is part of a whole Backup Consolidation project which uses two physical servers to serve both this IBM TSM service and a Dell/EMC Networker backup server. When everything is OK one of the nodes is dedicated to IBM TSM and the other one is used by Dell/EMC Networker, so all physical resources are well saturated and we do not ‘waste’ a whole node sitting 99% of the time empty waiting for the first node to crash. Of course if the first node misbehaves or has a hardware failure, then both IBM TSM and Dell/EMC Networker run nicely on a single node. It is also very convenient for various maintenance tasks to be able to switch all services to the other node and work in peace on the first one, but I do not have to tell you that. The third and last service, shared between these two, is the Oracle RMAN Catalog holding metadata information for the Oracle databases – also for backup/restore purposes.

I will not write instructions here to install the operating system (we use amd64 RHEL 6.x) or to set up the Veritas Cluster Server as I installed it earlier and it is quite simple to set up. These instructions focus on creating the IBM TSM highly available service along with using/allocating the resources from the IBM Storwize V5030 storage array, where 400 GB SSD disks are dedicated to the IBM TSM DB2 database instance and 1.8 TB 10K SAS disks are dedicated to DRAID groups that will be serving space for IBM TSM storage pools implemented as the latest IBM TSM container pools with deduplication and compression enabled. The head of the IBM Storwize V5030 storage array is shown below.

ibm-tsm-v5030-photo.jpg

Each node is an IBM System x3650 M4 server with two dual-port 8Gb FC cards and one dual-port 10GE card … along with builtin 1GE cards for the Veritas Cluster Server heartbeats. Each has 192 GB RAM and dual 6-core CPUs @ 3.5 GHz which translates to 12 physical cores or 24 HTT threads per node. The three internal SSD drives are used for the system only, in a RAID1 + SPARE configuration. All clustered resources come from the IBM Storwize V5030 FC/SAN storage array. The operating system installed on these nodes is amd64 RHEL 6.x and the Veritas Cluster Server is at the 6.2.x version. The IBM System x3650 M4 server is shown below.

ibm-tsm-x3650-m4.jpg

All of the settings/tuning/decisions were made based on the IBM TSM documentation and the great IBM Spectrum Protect Blueprints resources from the valuable IBM developerWorks wiki.

Storage Array Setup

First we need to create MDISKs. We used DRAID with double parity protection + spare for each MDISK, with 17 SAS 1.8 TB 10K disks each. That gives 14 disks for data, 2 for parity and 1 spare, all of which provide I/O thanks to the DRAID setup. We have three such MDISKs with ~21.7 TB each for a total of 65.1 TB for IBM TSM containers. Of course all these 3 ‘pool’ MDISKs are in one Storage Group. The LUNs for the IBM TSM DB2 database were carved from 5 SSD 400 GB disks set up in a DRAID array with 1 parity and 1 spare disk. This gives 3 disks for data, 1 for parity and 1 for spare space, which amounts to about 1.1 TB for the IBM TSM DB2 database.

Here are LUNs created from these MDISKs.

ibm-tsm-v5030.png

I needed to remove some names of course 🙂

LUNs Initialization

Veritas Cluster Server needs to have storage prepared as disk groups, which are similar in concept to (but more powerful than) LVM. Below are instructions to first detect and then initialize these LUNs from the IBM Storwize V5030 storage array. I marked them in blue for more clarity.

[root@300 ~]# haconf -makerw
[root@300 ~]# vxdisk -o alldgs list
DEVICE                TYPE            DISK         GROUP        STATUS
disk_0                auto:LVM        -            -            online invalid
storwizev70000_00000a auto:cdsdisk    -            (dg_fencing) online
storwizev70000_00000b auto:cdsdisk    stgFC_00B    NSR_dg_nsr   online
storwizev70000_00000c auto:cdsdisk    stgFC_00C    NSR_dg_nsr   online
storwizev70000_00000d auto:cdsdisk    stgFC_00D    NSR_dg_nsr   online
storwizev70000_00000e auto:cdsdisk    stgFC_00E    NSR_dg_nsr   online
storwizev70000_00000f auto:cdsdisk    -            (RMAN_dg)    online
storwizev70000_00001a auto:none       -            -            online invalid
storwizev70000_00001b auto:none       -            -            online invalid
storwizev70000_00001c auto:none       -            -            online invalid
storwizev70000_00001d auto:none       -            -            online invalid
storwizev70000_00001e auto:none       -            -            online invalid
storwizev70000_00001f auto:none       -            -            online invalid
storwizev70000_000008 auto:cdsdisk    -            (dg_fencing) online
storwizev70000_000009 auto:cdsdisk    -            (dg_fencing) online
storwizev70000_000010 auto:cdsdisk    -            (RMAN_dg)    online
storwizev70000_000011 auto:cdsdisk    -            (RMAN_dg)    online
storwizev70000_000012 auto:none       -            -            online invalid
storwizev70000_000013 auto:none       -            -            online invalid
storwizev70000_000014 auto:none       -            -            online invalid
storwizev70000_000015 auto:none       -            -            online invalid
storwizev70000_000016 auto:none       -            -            online invalid
storwizev70000_000017 auto:none       -            -            online invalid
storwizev70000_000018 auto:none       -            -            online invalid
storwizev70000_000019 auto:none       -            -            online invalid
storwizev70000_000020 auto:none       -            -            online invalid
[root@300 ~]# vxdisksetup -i storwizev70000_00001a
[root@300 ~]# vxdisksetup -i storwizev70000_00001b
[root@300 ~]# vxdisksetup -i storwizev70000_00001c
[root@300 ~]# vxdisksetup -i storwizev70000_00001d
[root@300 ~]# vxdisksetup -i storwizev70000_00001e
[root@300 ~]# vxdisksetup -i storwizev70000_00001f
[root@300 ~]# vxdisksetup -i storwizev70000_000012
[root@300 ~]# vxdisksetup -i storwizev70000_000013
[root@300 ~]# vxdisksetup -i storwizev70000_000014
[root@300 ~]# vxdisksetup -i storwizev70000_000015
[root@300 ~]# vxdisksetup -i storwizev70000_000016
[root@300 ~]# vxdisksetup -i storwizev70000_000017
[root@300 ~]# vxdisksetup -i storwizev70000_000018
[root@300 ~]# vxdisksetup -i storwizev70000_000019
[root@300 ~]# vxdisksetup -i storwizev70000_000020
[root@300 ~]# vxdisk -o alldgs list
DEVICE                TYPE            DISK         GROUP        STATUS
disk_0                auto:LVM        -            -            online invalid
storwizev70000_00000a auto:cdsdisk    -            (dg_fencing) online
storwizev70000_00000b auto:cdsdisk    stgFC_00B    NSR_dg_nsr   online
storwizev70000_00000c auto:cdsdisk    stgFC_00C    NSR_dg_nsr   online
storwizev70000_00000d auto:cdsdisk    stgFC_00D    NSR_dg_nsr   online
storwizev70000_00000e auto:cdsdisk    stgFC_00E    NSR_dg_nsr   online
storwizev70000_00000f auto:cdsdisk    -            (RMAN_dg)    online
storwizev70000_00001a auto:cdsdisk    -            -            online
storwizev70000_00001b auto:cdsdisk    -            -            online
storwizev70000_00001c auto:cdsdisk    -            -            online
storwizev70000_00001d auto:cdsdisk    -            -            online
storwizev70000_00001e auto:cdsdisk    -            -            online
storwizev70000_00001f auto:cdsdisk    -            -            online
storwizev70000_000008 auto:cdsdisk    -            (dg_fencing) online
storwizev70000_000009 auto:cdsdisk    -            (dg_fencing) online
storwizev70000_000010 auto:cdsdisk    -            (RMAN_dg)    online
storwizev70000_000011 auto:cdsdisk    -            (RMAN_dg)    online
storwizev70000_000012 auto:cdsdisk    -            -            online
storwizev70000_000013 auto:cdsdisk    -            -            online
storwizev70000_000014 auto:cdsdisk    -            -            online
storwizev70000_000015 auto:cdsdisk    -            -            online
storwizev70000_000016 auto:cdsdisk    -            -            online
storwizev70000_000017 auto:cdsdisk    -            -            online
storwizev70000_000018 auto:cdsdisk    -            -            online
storwizev70000_000019 auto:cdsdisk    -            -            online
storwizev70000_000020 auto:cdsdisk    -            -            online
[root@300 ~]# vxdg init TSM0_dg \
                stgFC_020=storwizev70000_000020 \
                stgFC_012=storwizev70000_000012 \
                stgFC_016=storwizev70000_000016 \
                stgFC_013=storwizev70000_000013 \
                stgFC_014=storwizev70000_000014 \
                stgFC_015=storwizev70000_000015 \
                stgFC_017=storwizev70000_000017 \
                stgFC_018=storwizev70000_000018 \
                stgFC_019=storwizev70000_000019 \
                stgFC_01A=storwizev70000_00001a \
                stgFC_01B=storwizev70000_00001b \
                stgFC_01C=storwizev70000_00001c \
                stgFC_01D=storwizev70000_00001d \
                stgFC_01E=storwizev70000_00001e \
                stgFC_01F=storwizev70000_00001f
[root@300 ~]# vxdisk -o alldgs list
DEVICE                TYPE            DISK         GROUP        STATUS
disk_0                auto:LVM        -            -            online invalid
storwizev70000_00000a auto:cdsdisk    -            (dg_fencing) online
storwizev70000_00000b auto:cdsdisk    stgFC_00B    NSR_dg_nsr   online
storwizev70000_00000c auto:cdsdisk    stgFC_00C    NSR_dg_nsr   online
storwizev70000_00000d auto:cdsdisk    stgFC_00D    NSR_dg_nsr   online
storwizev70000_00000e auto:cdsdisk    stgFC_00E    NSR_dg_nsr   online
storwizev70000_00000f auto:cdsdisk    -            (RMAN_dg)    online
storwizev70000_00001a auto:cdsdisk    stgFC_01A    TSM0_dg      online
storwizev70000_00001b auto:cdsdisk    stgFC_01B    TSM0_dg      online
storwizev70000_00001c auto:cdsdisk    stgFC_01C    TSM0_dg      online
storwizev70000_00001d auto:cdsdisk    stgFC_01D    TSM0_dg      online
storwizev70000_00001e auto:cdsdisk    stgFC_01E    TSM0_dg      online
storwizev70000_00001f auto:cdsdisk    stgFC_01F    TSM0_dg      online
storwizev70000_000008 auto:cdsdisk    -            (dg_fencing) online
storwizev70000_000009 auto:cdsdisk    -            (dg_fencing) online
storwizev70000_000010 auto:cdsdisk    -            (RMAN_dg)    online
storwizev70000_000011 auto:cdsdisk    -            (RMAN_dg)    online
storwizev70000_000012 auto:cdsdisk    stgFC_012    TSM0_dg      online
storwizev70000_000013 auto:cdsdisk    stgFC_013    TSM0_dg      online
storwizev70000_000014 auto:cdsdisk    stgFC_014    TSM0_dg      online
storwizev70000_000015 auto:cdsdisk    stgFC_015    TSM0_dg      online
storwizev70000_000016 auto:cdsdisk    stgFC_016    TSM0_dg      online
storwizev70000_000017 auto:cdsdisk    stgFC_017    TSM0_dg      online
storwizev70000_000018 auto:cdsdisk    stgFC_018    TSM0_dg      online
storwizev70000_000019 auto:cdsdisk    stgFC_019    TSM0_dg      online
storwizev70000_000020 auto:cdsdisk    stgFC_020    TSM0_dg      online
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_instance     maxsize=32G   stgFC_020
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_active_log   maxsize=128G  stgFC_012
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_archive_log  maxsize=384G  stgFC_016
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_db_01        maxsize=300G  stgFC_013
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_db_02        maxsize=300G  stgFC_014
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_db_03        maxsize=300G  stgFC_015
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_db_backup_01 maxsize=900G  stgFC_017
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_db_backup_02 maxsize=900G  stgFC_018
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_db_backup_03 maxsize=900G  stgFC_019
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_pool0_01     maxsize=6700G stgFC_01A
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_pool0_02     maxsize=6700G stgFC_01B
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_pool0_03     maxsize=6700G stgFC_01C
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_pool0_04     maxsize=6700G stgFC_01D
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_pool0_05     maxsize=6700G stgFC_01E
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_pool0_06     maxsize=6700G stgFC_01F
[root@300 ~]# vxprint -u h | grep ^sd | column -t
sd  stgFC_00B-01  NSR_vol_index-02          ENABLED  399.95g  0.00  -  -  -
sd  stgFC_00C-01  NSR_vol_media-02          ENABLED  9.96g    0.00  -  -  -
sd  stgFC_00D-01  NSR_vol_nsr-02            ENABLED  79.96g   0.00  -  -  -
sd  stgFC_00E-01  NSR_vol_res-02            ENABLED  9.96g    0.00  -  -  -
sd  stgFC_012-01  TSM0_vol_active_log-01    ENABLED  127.96g  0.00  -  -  -
sd  stgFC_016-01  TSM0_vol_archive_log-01   ENABLED  383.95g  0.00  -  -  -
sd  stgFC_017-01  TSM0_vol_db_backup_01-01  ENABLED  899.93g  0.00  -  -  -
sd  stgFC_018-01  TSM0_vol_db_backup_02-01  ENABLED  899.93g  0.00  -  -  -
sd  stgFC_019-01  TSM0_vol_db_backup_03-01  ENABLED  899.93g  0.00  -  -  -
sd  stgFC_013-01  TSM0_vol_db_01-01         ENABLED  299.95g  0.00  -  -  -
sd  stgFC_014-01  TSM0_vol_db_02-01         ENABLED  299.95g  0.00  -  -  -
sd  stgFC_015-01  TSM0_vol_db_03-01         ENABLED  299.95g  0.00  -  -  -
sd  stgFC_020-01  TSM0_vol_instance-01      ENABLED  31.96g   0.00  -  -  -
sd  stgFC_01A-01  TSM0_vol_pool0_01-01      ENABLED  6.54t    0.00  -  -  -
sd  stgFC_01B-01  TSM0_vol_pool0_02-01      ENABLED  6.54t    0.00  -  -  -
sd  stgFC_01C-01  TSM0_vol_pool0_03-01      ENABLED  6.54t    0.00  -  -  -
sd  stgFC_01D-01  TSM0_vol_pool0_04-01      ENABLED  6.54t    0.00  -  -  -
sd  stgFC_01E-01  TSM0_vol_pool0_05-01      ENABLED  6.54t    0.00  -  -  -
sd  stgFC_01F-01  TSM0_vol_pool0_06-01      ENABLED  6.54t    0.00  -  -  -
[root@300 ~]# vxprint -u h -g TSM0_dg | column -t
TY  NAME                      ASSOC                     KSTATE   LENGTH   PLOFFS  STATE   TUTIL0  PUTIL0
dg  TSM0_dg                   TSM0_dg                   -        -        -       -       -       -
dm  stgFC_01A                 storwizev70000_00001a     -        6.54t    -       -       -       -
dm  stgFC_01B                 storwizev70000_00001b     -        6.54t    -       -       -       -
dm  stgFC_01C                 storwizev70000_00001c     -        6.54t    -       -       -       -
dm  stgFC_01D                 storwizev70000_00001d     -        6.54t    -       -       -       -
dm  stgFC_01E                 storwizev70000_00001e     -        6.54t    -       -       -       -
dm  stgFC_01F                 storwizev70000_00001f     -        6.54t    -       -       -       -
dm  stgFC_012                 storwizev70000_000012     -        127.96g  -       -       -       -
dm  stgFC_013                 storwizev70000_000013     -        299.95g  -       -       -       -
dm  stgFC_014                 storwizev70000_000014     -        299.95g  -       -       -       -
dm  stgFC_015                 storwizev70000_000015     -        299.95g  -       -       -       -
dm  stgFC_016                 storwizev70000_000016     -        383.95g  -       -       -       -
dm  stgFC_017                 storwizev70000_000017     -        899.93g  -       -       -       -
dm  stgFC_018                 storwizev70000_000018     -        899.93g  -       -       -       -
dm  stgFC_019                 storwizev70000_000019     -        899.93g  -       -       -       -
dm  stgFC_020                 storwizev70000_000020     -        31.96g   -       -       -       -

v   TSM0_vol_active_log       fsgen                     ENABLED  127.96g  -       ACTIVE  -       -
pl  TSM0_vol_active_log-01    TSM0_vol_active_log       ENABLED  127.96g  -       ACTIVE  -       -
sd  stgFC_012-01              TSM0_vol_active_log-01    ENABLED  127.96g  0.00    -       -       -

v   TSM0_vol_archive_log      fsgen                     ENABLED  383.95g  -       ACTIVE  -       -
pl  TSM0_vol_archive_log-01   TSM0_vol_archive_log      ENABLED  383.95g  -       ACTIVE  -       -
sd  stgFC_016-01              TSM0_vol_archive_log-01   ENABLED  383.95g  0.00    -       -       -

v   TSM0_vol_db_backup_01     fsgen                     ENABLED  899.93g  -       ACTIVE  -       -
pl  TSM0_vol_db_backup_01-01  TSM0_vol_db_backup_01     ENABLED  899.93g  -       ACTIVE  -       -
sd  stgFC_017-01              TSM0_vol_db_backup_01-01  ENABLED  899.93g  0.00    -       -       -

v   TSM0_vol_db_backup_02     fsgen                     ENABLED  899.93g  -       ACTIVE  -       -
pl  TSM0_vol_db_backup_02-01  TSM0_vol_db_backup_02     ENABLED  899.93g  -       ACTIVE  -       -
sd  stgFC_018-01              TSM0_vol_db_backup_02-01  ENABLED  899.93g  0.00    -       -       -

v   TSM0_vol_db_backup_03     fsgen                     ENABLED  899.93g  -       ACTIVE  -       -
pl  TSM0_vol_db_backup_03-01  TSM0_vol_db_backup_03     ENABLED  899.93g  -       ACTIVE  -       -
sd  stgFC_019-01              TSM0_vol_db_backup_03-01  ENABLED  899.93g  0.00    -       -       -

v   TSM0_vol_db_01            fsgen                     ENABLED  299.95g  -       ACTIVE  -       -
pl  TSM0_vol_db_01-01         TSM0_vol_db_01            ENABLED  299.95g  -       ACTIVE  -       -
sd  stgFC_013-01              TSM0_vol_db_01-01         ENABLED  299.95g  0.00    -       -       -

v   TSM0_vol_db_02            fsgen                     ENABLED  299.95g  -       ACTIVE  -       -
pl  TSM0_vol_db_02-01         TSM0_vol_db_02            ENABLED  299.95g  -       ACTIVE  -       -
sd  stgFC_014-01              TSM0_vol_db_02-01         ENABLED  299.95g  0.00    -       -       -

v   TSM0_vol_db_03            fsgen                     ENABLED  299.95g  -       ACTIVE  -       -
pl  TSM0_vol_db_03-01         TSM0_vol_db_03            ENABLED  299.95g  -       ACTIVE  -       -
sd  stgFC_015-01              TSM0_vol_db_03-01         ENABLED  299.95g  0.00    -       -       -

v   TSM0_vol_instance         fsgen                     ENABLED  31.96g   -       ACTIVE  -       -
pl  TSM0_vol_instance-01      TSM0_vol_instance         ENABLED  31.96g   -       ACTIVE  -       -
sd  stgFC_020-01              TSM0_vol_instance-01      ENABLED  31.96g   0.00    -       -       -

v   TSM0_vol_pool0_01         fsgen                     ENABLED  6.54t    -       ACTIVE  -       -
pl  TSM0_vol_pool0_01-01      TSM0_vol_pool0_01         ENABLED  6.54t    -       ACTIVE  -       -
sd  stgFC_01A-01              TSM0_vol_pool0_01-01      ENABLED  6.54t    0.00    -       -       -

v   TSM0_vol_pool0_02         fsgen                     ENABLED  6.54t    -       ACTIVE  -       -
pl  TSM0_vol_pool0_02-01      TSM0_vol_pool0_02         ENABLED  6.54t    -       ACTIVE  -       -
sd  stgFC_01B-01              TSM0_vol_pool0_02-01      ENABLED  6.54t    0.00    -       -       -

v   TSM0_vol_pool0_03         fsgen                     ENABLED  6.54t    -       ACTIVE  -       -
pl  TSM0_vol_pool0_03-01      TSM0_vol_pool0_03         ENABLED  6.54t    -       ACTIVE  -       -
sd  stgFC_01C-01              TSM0_vol_pool0_03-01      ENABLED  6.54t    0.00    -       -       -

v   TSM0_vol_pool0_04         fsgen                     ENABLED  6.54t    -       ACTIVE  -       -
pl  TSM0_vol_pool0_04-01      TSM0_vol_pool0_04         ENABLED  6.54t    -       ACTIVE  -       -
sd  stgFC_01D-01              TSM0_vol_pool0_04-01      ENABLED  6.54t    0.00    -       -       -

v   TSM0_vol_pool0_05         fsgen                     ENABLED  6.54t    -       ACTIVE  -       -
pl  TSM0_vol_pool0_05-01      TSM0_vol_pool0_05         ENABLED  6.54t    -       ACTIVE  -       -
sd  stgFC_01E-01              TSM0_vol_pool0_05-01      ENABLED  6.54t    0.00    -       -       -

v   TSM0_vol_pool0_06         fsgen                     ENABLED  6.54t    -       ACTIVE  -       -
pl  TSM0_vol_pool0_06-01      TSM0_vol_pool0_06         ENABLED  6.54t    -       ACTIVE  -       -
sd  stgFC_01F-01              TSM0_vol_pool0_06-01      ENABLED  6.54t    0.00    -       -       -
[root@300 ~]# vxinfo -p -g TSM0_dg | column -t
vol   TSM0_vol_instance         fsgen   Started
plex  TSM0_vol_instance-01      ACTIVE
vol   TSM0_vol_active_log       fsgen   Started
plex  TSM0_vol_active_log-01    ACTIVE
vol   TSM0_vol_archive_log      fsgen   Started
plex  TSM0_vol_archive_log-01   ACTIVE
vol   TSM0_vol_db_01            fsgen   Started
plex  TSM0_vol_db_01-01         ACTIVE
vol   TSM0_vol_db_02            fsgen   Started
plex  TSM0_vol_db_02-01         ACTIVE
vol   TSM0_vol_db_03            fsgen   Started
plex  TSM0_vol_db_03-01         ACTIVE
vol   TSM0_vol_db_backup_01     fsgen   Started
plex  TSM0_vol_db_backup_01-01  ACTIVE
vol   TSM0_vol_db_backup_02     fsgen   Started
plex  TSM0_vol_db_backup_02-01  ACTIVE
vol   TSM0_vol_db_backup_03     fsgen   Started
plex  TSM0_vol_db_backup_03-01  ACTIVE
vol   TSM0_vol_pool0_01         fsgen   Started
plex  TSM0_vol_pool0_01-01      ACTIVE
vol   TSM0_vol_pool0_02         fsgen   Started
plex  TSM0_vol_pool0_02-01      ACTIVE
vol   TSM0_vol_pool0_03         fsgen   Started
plex  TSM0_vol_pool0_03-01      ACTIVE
vol   TSM0_vol_pool0_04         fsgen   Started
plex  TSM0_vol_pool0_04-01      ACTIVE
vol   TSM0_vol_pool0_05         fsgen   Started
plex  TSM0_vol_pool0_05-01      ACTIVE
vol   TSM0_vol_pool0_06         fsgen   Started
plex  TSM0_vol_pool0_06-01      ACTIVE
[root@300 ~]# find /dev/vx/dsk -name TSM0_\*
/dev/vx/dsk/TSM0_dg
/dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_06
/dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_05
/dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_04
/dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_03
/dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_02
/dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_01
/dev/vx/dsk/TSM0_dg/TSM0_vol_db_backup_03
/dev/vx/dsk/TSM0_dg/TSM0_vol_db_backup_02
/dev/vx/dsk/TSM0_dg/TSM0_vol_db_backup_01
/dev/vx/dsk/TSM0_dg/TSM0_vol_db_03
/dev/vx/dsk/TSM0_dg/TSM0_vol_db_02
/dev/vx/dsk/TSM0_dg/TSM0_vol_db_01
/dev/vx/dsk/TSM0_dg/TSM0_vol_archive_log
/dev/vx/dsk/TSM0_dg/TSM0_vol_active_log
/dev/vx/dsk/TSM0_dg/TSM0_vol_instance
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_pool0_06     &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_pool0_05     &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_pool0_04     &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_pool0_03     &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_pool0_02     &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_pool0_01     &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_db_backup_03 &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_db_backup_02 &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_db_backup_01 &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_db_03        &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_db_02        &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_db_01        &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_archive_log  &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_active_log   &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_instance     &

[root@300 ~]# haconf -dump -makero

Veritas Cluster Server Group

Now that we have the LUNs initialized into a Disk Group we may create the cluster service.

[root@300 ~]# haconf -makerw
[root@300 ~]# hagrp -add TSM0_site
VCS NOTICE V-16-1-10136 Group added; populating SystemList and setting the Parallel attribute recommended before adding resources
[root@300 ~]# hagrp -modify TSM0_site SystemList 300 0 301 1
[root@300 ~]# hagrp -modify TSM0_site AutoStartList 300 301
[root@300 ~]# hagrp -modify TSM0_site Parallel 0
[root@300 ~]# hares -add    TSM0_nic_bond0 NIC TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_nic_bond0 Critical 1
[root@300 ~]# hares -modify TSM0_nic_bond0 PingOptimize 1
[root@300 ~]# hares -modify TSM0_nic_bond0 Device bond0
[root@300 ~]# hares -modify TSM0_nic_bond0 Enabled 1
[root@300 ~]# hares -probe  TSM0_nic_bond0 -sys 301
[root@300 ~]# hares -add    TSM0_ip_bond0 IP TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_ip_bond0 Critical 1
[root@300 ~]# hares -modify TSM0_ip_bond0 Device bond0
[root@300 ~]# hares -modify TSM0_ip_bond0 Address 10.20.30.44
[root@300 ~]# hares -modify TSM0_ip_bond0 NetMask 255.255.255.0
[root@300 ~]# hares -modify TSM0_ip_bond0 Enabled 1
[root@300 ~]# hares -link   TSM0_ip_bond0 TSM0_nic_bond0
[root@300 ~]# hares -add    TSM0_dg DiskGroup TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_dg Critical 1
[root@300 ~]# hares -modify TSM0_dg DiskGroup TSM0_dg
[root@300 ~]# hares -modify TSM0_dg Enabled 1
[root@300 ~]# hares -probe  TSM0_dg -sys 301
[root@300 ~]# mkdir /tsm0
[root@301 ~]# mkdir /tsm0

I did not want to type all of these over and over again so I generated the commands as shown below.

[LOCAL] % cat > LIST << __EOF
stgFC_020    32  /tsm0                         TSM0_vol_instance      TSM0_mnt_instance
stgFC_012   128  /tsm0/active_log              TSM0_vol_active_log    TSM0_mnt_active_log
stgFC_016   384  /tsm0/archive_log             TSM0_vol_archive_log   TSM0_mnt_archive_log
stgFC_013   300  /tsm0/db/db_01                TSM0_vol_db_01         TSM0_mnt_db_01
stgFC_014   300  /tsm0/db/db_02                TSM0_vol_db_02         TSM0_mnt_db_02
stgFC_015   300  /tsm0/db/db_03                TSM0_vol_db_03         TSM0_mnt_db_03
stgFC_017   900  /tsm0/db_backup/db_backup_01  TSM0_vol_db_backup_01  TSM0_mnt_db_backup_01
stgFC_018   900  /tsm0/db_backup/db_backup_02  TSM0_vol_db_backup_02  TSM0_mnt_db_backup_02
stgFC_019   900  /tsm0/db_backup/db_backup_03  TSM0_vol_db_backup_03  TSM0_mnt_db_backup_03
stgFC_01A  6700  /tsm0/pool0/pool0_01          TSM0_vol_pool0_01      TSM0_mnt_pool0_01
stgFC_01B  6700  /tsm0/pool0/pool0_02          TSM0_vol_pool0_02      TSM0_mnt_pool0_02
stgFC_01C  6700  /tsm0/pool0/pool0_03          TSM0_vol_pool0_03      TSM0_mnt_pool0_03
stgFC_01D  6700  /tsm0/pool0/pool0_04          TSM0_vol_pool0_04      TSM0_mnt_pool0_04
stgFC_01E  6700  /tsm0/pool0/pool0_05          TSM0_vol_pool0_05      TSM0_mnt_pool0_05
stgFC_01F  6700  /tsm0/pool0/pool0_06          TSM0_vol_pool0_06      TSM0_mnt_pool0_06
__EOF
[LOCAL]# cat LIST \
  | while read STG SIZE MNTPOINT VOL MNTNAME
    do
      echo sleep 0.2; echo hares -add    ${MNTNAME} Mount TSM0_site
      echo sleep 0.2; echo hares -modify ${MNTNAME} Critical 1
      echo sleep 0.2; echo hares -modify ${MNTNAME} SnapUmount 0
      echo sleep 0.2; echo hares -modify ${MNTNAME} MountPoint ${MNTPOINT}
      echo sleep 0.2; echo hares -modify ${MNTNAME} BlockDevice /dev/vx/dsk/TSM0_dg/${VOL}
      echo sleep 0.2; echo hares -modify ${MNTNAME} FSType vxfs
      echo sleep 0.2; echo hares -modify ${MNTNAME} MountOpt largefiles
      echo sleep 0.2; echo hares -modify ${MNTNAME} FsckOpt %-y
      echo sleep 0.2; echo hares -modify ${MNTNAME} Enabled 1
      echo sleep 0.2; echo hares -probe  ${MNTNAME} -sys 301
      echo sleep 0.2; echo hares -link   ${MNTNAME} TSM0_dg
      echo
    done
[root@300 ~]# hares -add    TSM0_mnt_instance Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_instance Critical 1
[root@300 ~]# hares -modify TSM0_mnt_instance SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_instance MountPoint /tsm0
[root@300 ~]# hares -modify TSM0_mnt_instance BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_instance
[root@300 ~]# hares -modify TSM0_mnt_instance FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_instance MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_instance FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_instance Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_instance -sys 301
[root@300 ~]# hares -link   TSM0_mnt_instance TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_active_log Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_active_log Critical 1
[root@300 ~]# hares -modify TSM0_mnt_active_log SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_active_log MountPoint /tsm0/active_log
[root@300 ~]# hares -modify TSM0_mnt_active_log BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_active_log
[root@300 ~]# hares -modify TSM0_mnt_active_log FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_active_log MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_active_log FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_active_log Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_active_log -sys 301
[root@300 ~]# hares -link   TSM0_mnt_active_log TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_archive_log Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_archive_log Critical 1
[root@300 ~]# hares -modify TSM0_mnt_archive_log SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_archive_log MountPoint /tsm0/archive_log
[root@300 ~]# hares -modify TSM0_mnt_archive_log BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_archive_log
[root@300 ~]# hares -modify TSM0_mnt_archive_log FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_archive_log MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_archive_log FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_archive_log Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_archive_log -sys 301
[root@300 ~]# hares -link   TSM0_mnt_archive_log TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_db_01 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_db_01 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_db_01 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_db_01 MountPoint /tsm0/db/db_01
[root@300 ~]# hares -modify TSM0_mnt_db_01 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_db_01
[root@300 ~]# hares -modify TSM0_mnt_db_01 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_db_01 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_db_01 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_db_01 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_db_01 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_db_01 TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_db_02 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_db_02 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_db_02 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_db_02 MountPoint /tsm0/db/db_02
[root@300 ~]# hares -modify TSM0_mnt_db_02 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_db_02
[root@300 ~]# hares -modify TSM0_mnt_db_02 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_db_02 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_db_02 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_db_02 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_db_02 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_db_02 TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_db_03 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_db_03 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_db_03 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_db_03 MountPoint /tsm0/db/db_03
[root@300 ~]# hares -modify TSM0_mnt_db_03 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_db_03
[root@300 ~]# hares -modify TSM0_mnt_db_03 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_db_03 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_db_03 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_db_03 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_db_03 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_db_03 TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_db_backup_01 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_db_backup_01 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_db_backup_01 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_db_backup_01 MountPoint /tsm0/db_backup/db_backup_01
[root@300 ~]# hares -modify TSM0_mnt_db_backup_01 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_db_backup_01
[root@300 ~]# hares -modify TSM0_mnt_db_backup_01 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_db_backup_01 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_db_backup_01 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_db_backup_01 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_db_backup_01 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_db_backup_01 TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_db_backup_02 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_db_backup_02 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_db_backup_02 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_db_backup_02 MountPoint /tsm0/db_backup/db_backup_02
[root@300 ~]# hares -modify TSM0_mnt_db_backup_02 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_db_backup_02
[root@300 ~]# hares -modify TSM0_mnt_db_backup_02 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_db_backup_02 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_db_backup_02 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_db_backup_02 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_db_backup_02 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_db_backup_02 TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_db_backup_03 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_db_backup_03 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_db_backup_03 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_db_backup_03 MountPoint /tsm0/db_backup/db_backup_03
[root@300 ~]# hares -modify TSM0_mnt_db_backup_03 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_db_backup_03
[root@300 ~]# hares -modify TSM0_mnt_db_backup_03 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_db_backup_03 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_db_backup_03 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_db_backup_03 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_db_backup_03 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_db_backup_03 TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_pool0_01 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_pool0_01 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_pool0_01 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_pool0_01 MountPoint /tsm0/pool0/pool0_01
[root@300 ~]# hares -modify TSM0_mnt_pool0_01 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_01
[root@300 ~]# hares -modify TSM0_mnt_pool0_01 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_pool0_01 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_pool0_01 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_pool0_01 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_pool0_01 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_pool0_01 TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_pool0_02 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_pool0_02 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_pool0_02 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_pool0_02 MountPoint /tsm0/pool0/pool0_02
[root@300 ~]# hares -modify TSM0_mnt_pool0_02 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_02
[root@300 ~]# hares -modify TSM0_mnt_pool0_02 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_pool0_02 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_pool0_02 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_pool0_02 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_pool0_02 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_pool0_02 TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_pool0_03 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_pool0_03 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_pool0_03 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_pool0_03 MountPoint /tsm0/pool0/pool0_03
[root@300 ~]# hares -modify TSM0_mnt_pool0_03 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_03
[root@300 ~]# hares -modify TSM0_mnt_pool0_03 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_pool0_03 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_pool0_03 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_pool0_03 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_pool0_03 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_pool0_03 TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_pool0_04 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_pool0_04 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_pool0_04 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_pool0_04 MountPoint /tsm0/pool0/pool0_04
[root@300 ~]# hares -modify TSM0_mnt_pool0_04 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_04
[root@300 ~]# hares -modify TSM0_mnt_pool0_04 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_pool0_04 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_pool0_04 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_pool0_04 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_pool0_04 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_pool0_04 TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_pool0_05 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_pool0_05 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_pool0_05 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_pool0_05 MountPoint /tsm0/pool0/pool0_05
[root@300 ~]# hares -modify TSM0_mnt_pool0_05 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_05
[root@300 ~]# hares -modify TSM0_mnt_pool0_05 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_pool0_05 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_pool0_05 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_pool0_05 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_pool0_05 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_pool0_05 TSM0_dg
[root@300 ~]# hares -add    TSM0_mnt_pool0_06 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_pool0_06 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_pool0_06 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_pool0_06 MountPoint /tsm0/pool0/pool0_06
[root@300 ~]# hares -modify TSM0_mnt_pool0_06 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_06
[root@300 ~]# hares -modify TSM0_mnt_pool0_06 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_pool0_06 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_pool0_06 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_pool0_06 Enabled 1
[root@300 ~]# hares -probe  TSM0_mnt_pool0_06 -sys 301
[root@300 ~]# hares -link   TSM0_mnt_pool0_06 TSM0_dg
[root@300 ~]# hares -state | grep TSM0 | grep _mnt_ | \
                while read I; do hares -display $I 2>&1 | grep -v ArgListValues | grep 'largefiles'; done | column -t
TSM0_mnt_active_log    MountOpt  localclus  largefiles
TSM0_mnt_active_log    MountOpt  localclus  largefiles
TSM0_mnt_archive_log   MountOpt  localclus  largefiles
TSM0_mnt_archive_log   MountOpt  localclus  largefiles
TSM0_mnt_db_01         MountOpt  localclus  largefiles
TSM0_mnt_db_01         MountOpt  localclus  largefiles
TSM0_mnt_db_02         MountOpt  localclus  largefiles
TSM0_mnt_db_02         MountOpt  localclus  largefiles
TSM0_mnt_db_03         MountOpt  localclus  largefiles
TSM0_mnt_db_03         MountOpt  localclus  largefiles
TSM0_mnt_db_backup_01  MountOpt  localclus  largefiles
TSM0_mnt_db_backup_01  MountOpt  localclus  largefiles
TSM0_mnt_db_backup_02  MountOpt  localclus  largefiles
TSM0_mnt_db_backup_02  MountOpt  localclus  largefiles
TSM0_mnt_db_backup_03  MountOpt  localclus  largefiles
TSM0_mnt_db_backup_03  MountOpt  localclus  largefiles
TSM0_mnt_instance      MountOpt  localclus  largefiles
TSM0_mnt_instance      MountOpt  localclus  largefiles
TSM0_mnt_pool0_01      MountOpt  localclus  largefiles
TSM0_mnt_pool0_01      MountOpt  localclus  largefiles
TSM0_mnt_pool0_02      MountOpt  localclus  largefiles
TSM0_mnt_pool0_02      MountOpt  localclus  largefiles
TSM0_mnt_pool0_03      MountOpt  localclus  largefiles
TSM0_mnt_pool0_03      MountOpt  localclus  largefiles
TSM0_mnt_pool0_04      MountOpt  localclus  largefiles
TSM0_mnt_pool0_04      MountOpt  localclus  largefiles
TSM0_mnt_pool0_05      MountOpt  localclus  largefiles
TSM0_mnt_pool0_05      MountOpt  localclus  largefiles
TSM0_mnt_pool0_06      MountOpt  localclus  largefiles
TSM0_mnt_pool0_06      MountOpt  localclus  largefiles
[root@300 ~]# hares -add    TSM0_server Application TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_server StartProgram   "/etc/init.d/tsm0 start"
[root@300 ~]# hares -modify TSM0_server StopProgram    "/etc/init.d/tsm0 stop"
[root@300 ~]# hares -modify TSM0_server MonitorProgram "/etc/init.d/tsm0 status"
[root@300 ~]# hares -modify TSM0_server Enabled 1
[root@300 ~]# hares -probe  TSM0_server -sys 301
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_active_log
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_archive_log
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_db_01
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_db_02
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_db_03
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_db_backup_01
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_db_backup_02
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_db_backup_03
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_pool0_01
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_pool0_02
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_pool0_03
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_pool0_04
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_pool0_05
[root@300 ~]# hares -link   TSM0_server           TSM0_mnt_pool0_06
[root@300 ~]# hares -link   TSM0_server           TSM0_ip_bond0
[root@300 ~]# hares -link   TSM0_mnt_active_log   TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_archive_log  TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_db_01        TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_db_02        TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_db_03        TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_db_backup_01 TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_db_backup_02 TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_db_backup_03 TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_pool0_01     TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_pool0_02     TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_pool0_03     TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_pool0_04     TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_pool0_05     TSM0_mnt_instance
[root@300 ~]# hares -link   TSM0_mnt_pool0_06     TSM0_mnt_instance
[root@300 ~]# vxdg import TSM0_dg
[root@300 ~]# mount -t vxfs /dev/vx/dsk/TSM0_dg/TSM0_vol_instance /tsm0
[root@300 ~]# mkdir -p /tsm0/active_log
[root@300 ~]# mkdir -p /tsm0/archive_log
[root@300 ~]# mkdir -p /tsm0/db/db_01
[root@300 ~]# mkdir -p /tsm0/db/db_02
[root@300 ~]# mkdir -p /tsm0/db/db_03
[root@300 ~]# mkdir -p /tsm0/db_backup/db_backup_01
[root@300 ~]# mkdir -p /tsm0/db_backup/db_backup_02
[root@300 ~]# mkdir -p /tsm0/db_backup/db_backup_03
[root@300 ~]# mkdir -p /tsm0/pool0/pool0_01
[root@300 ~]# mkdir -p /tsm0/pool0/pool0_02
[root@300 ~]# mkdir -p /tsm0/pool0/pool0_03
[root@300 ~]# mkdir -p /tsm0/pool0/pool0_04
[root@300 ~]# mkdir -p /tsm0/pool0/pool0_05
[root@300 ~]# mkdir -p /tsm0/pool0/pool0_06
[root@300 ~]# find /tsm0
/tsm0
/tsm0/lost+found
/tsm0/active_log
/tsm0/archive_log
/tsm0/db
/tsm0/db/db_01
/tsm0/db/db_02
/tsm0/db/db_03
/tsm0/db_backup
/tsm0/db_backup/db_backup_01
/tsm0/db_backup/db_backup_02
/tsm0/db_backup/db_backup_03
/tsm0/pool0
/tsm0/pool0/pool0_01
/tsm0/pool0/pool0_02
/tsm0/pool0/pool0_03
/tsm0/pool0/pool0_04
/tsm0/pool0/pool0_05
/tsm0/pool0/pool0_06
[root@300 ~]# umount /tsm0
[root@300 ~]# vxdg deport TSM0_dg
[root@300 ~]# haconf -dump -makero
[root@300 ~]# grep TSM0_server /etc/VRTSvcs/conf/config/main.cf
        Application TSM0_server (
        TSM0_server requires TSM0_ip_bond0
        TSM0_server requires TSM0_mnt_active_log
        TSM0_server requires TSM0_mnt_archive_log
        TSM0_server requires TSM0_mnt_db_01
        TSM0_server requires TSM0_mnt_db_02
        TSM0_server requires TSM0_mnt_db_03
        TSM0_server requires TSM0_mnt_db_backup_01
        TSM0_server requires TSM0_mnt_db_backup_02
        TSM0_server requires TSM0_mnt_db_backup_03
        TSM0_server requires TSM0_mnt_instance
        TSM0_server requires TSM0_mnt_pool0_01
        TSM0_server requires TSM0_mnt_pool0_02
        TSM0_server requires TSM0_mnt_pool0_03
        TSM0_server requires TSM0_mnt_pool0_04
        TSM0_server requires TSM0_mnt_pool0_05
        TSM0_server requires TSM0_mnt_pool0_06
        //      Application TSM0_server

Local Per Node Resources

[root@300 ~]# lvcreate -n lv_tmp        -L  4G vg_local
[root@300 ~]# lvcreate -n lv_opt_tivoli -L 16G vg_local
[root@300 ~]# lvcreate -n lv_home       -L  4G vg_local
[root@300 ~]# mkfs.ext3 /dev/vg_local/lv_tmp
[root@300 ~]# mkfs.ext3 /dev/vg_local/lv_opt_tivoli
[root@300 ~]# mkfs.ext3 /dev/vg_local/lv_home
[root@301 ~]# lvcreate -n lv_tmp        -L  4G vg_local
[root@301 ~]# lvcreate -n lv_opt_tivoli -L 16G vg_local
[root@301 ~]# lvcreate -n lv_home       -L  4G vg_local
[root@301 ~]# mkfs.ext3 /dev/vg_local/lv_tmp
[root@301 ~]# mkfs.ext3 /dev/vg_local/lv_opt_tivoli
[root@301 ~]# mkfs.ext3 /dev/vg_local/lv_home
[root@300 ~]# cat /etc/fstab
/dev/mapper/vg_local-lv_root              /           ext3 rw,noatime,nodiratime      1 1
UUID=28d0988a-e6d7-48d8-b0e5-0f70f8eb681e /boot       ext3 defaults                   1 2
UUID=D401-661A                            /boot/efi   vfat umask=0077,shortname=winnt 0 0
/dev/vg_local/lv_swap                     swap        swap defaults                   0 0
/dev/vg_local/lv_tmp                      /tmp        ext3 rw,noatime,nodiratime      2 2
/dev/vg_local/lv_opt_tivoli               /opt/tivoli ext3 rw,noatime,nodiratime      2 2
/dev/vg_local/lv_home                     /home       ext3 rw,noatime,nodiratime      2 2

# VIRT
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
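
The new local file systems from /etc/fstab can be mounted right away; a minimal sketch for one node (the /opt/tivoli mount point has to exist first, node 301 is handled the same way):

[root@300 ~]# mkdir -p /opt/tivoli
[root@300 ~]# mount -a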

Install IBM TSM Server Dependencies

[root@ANY ~]# yum install numactl
[root@ANY ~]# yum install /usr/lib/libgtk-x11-2.0.so.0
[root@ANY ~]# yum install /usr/lib64/libgtk-x11-2.0.so.0
[root@ANY ~]# yum install xorg-x11-xauth xterm fontconfig libICE \
                          libX11-common libXau libXmu libSM libX11 libXt

System /etc/sysctl.conf parameters for both nodes.

[root@300 ~]# cat /etc/sysctl.conf
# Controls IP packet forwarding
net.ipv4.ip_forward = 0

# Controls source route verification
net.ipv4.conf.default.rp_filter = 1

# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0

# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0

# Controls whether core dumps will append the PID to the core filename.
# Useful for debugging multi-threaded applications.
kernel.core_uses_pid = 1

# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1

# Disable netfilter on bridges.
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0

# Controls the default maxmimum size of a mesage queue
kernel.msgmnb = 65536

# Controls the maximum size of a message, in bytes
kernel.msgmax = 65536

# Controls the maximum shared segment size, in bytes
kernel.shmmax = 206158430208

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296

# For SF HA
kernel.hung_task_panic=0

# NetWorker
# connection backlog (hash tables) to the maximum value allowed
net.ipv4.tcp_max_syn_backlog = 8192
net.core.netdev_max_backlog = 8192

# increase the memory size available for TCP buffers
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 8192 524288 16777216
net.ipv4.tcp_wmem = 8192 524288 16777216

# recommended keepalive values
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 20
net.ipv4.tcp_keepalive_time = 600

# recommended timeout after improper close
net.ipv4.tcp_fin_timeout = 60
sunrpc.tcp_slot_table_entries = 64

# for RDBMS 11.2.0.4 rman cat
fs.suid_dumpable = 1
fs.aio-max-nr = 1048576
fs.file-max = 6815744

# support EMC 2016.04.20
net.core.somaxconn = 1024

# 256 * RAM in GB
kernel.shmmni = 65536

# TSM/NSR
kernel.sem = 250 256000 32 65536

# RAM in GB * 1024
kernel.msgmni = 262144

# TSM
kernel.randomize_va_space = 0
vm.swappiness = 0
vm.overcommit_memory = 0
[root@301 ~]# cat /etc/sysctl.conf
# Controls IP packet forwarding
net.ipv4.ip_forward = 0

# Controls source route verification
net.ipv4.conf.default.rp_filter = 1

# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0

# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0

# Controls whether core dumps will append the PID to the core filename.
# Useful for debugging multi-threaded applications.
kernel.core_uses_pid = 1

# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1

# Disable netfilter on bridges.
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0

# Controls the default maxmimum size of a mesage queue
kernel.msgmnb = 65536

# Controls the maximum size of a message, in bytes
kernel.msgmax = 65536

# Controls the maximum shared segment size, in bytes
kernel.shmmax = 206158430208

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296

# For SF HA
kernel.hung_task_panic=0

# NetWorker
# connection backlog (hash tables) to the maximum value allowed
net.ipv4.tcp_max_syn_backlog = 8192
net.core.netdev_max_backlog = 8192

# increase the memory size available for TCP buffers
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 8192 524288 16777216
net.ipv4.tcp_wmem = 8192 524288 16777216

# recommended keepalive values
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 20
net.ipv4.tcp_keepalive_time = 600

# recommended timeout after improper close
net.ipv4.tcp_fin_timeout = 60
sunrpc.tcp_slot_table_entries = 64

# for RDBMS 11.2.0.4 rman cat
fs.suid_dumpable = 1
fs.aio-max-nr = 1048576
fs.file-max = 6815744

# support EMC 2016.04.20
net.core.somaxconn = 1024

# 256 * RAM in GB
kernel.shmmni = 65536

# TSM/NSR
kernel.sem = 250 256000 32 65536

# RAM in GB * 1024
kernel.msgmni = 262144

# TSM
kernel.randomize_va_space = 0
vm.swappiness = 0
vm.overcommit_memory = 0
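
These settings are only read at boot, so load them into the running kernel on both nodes with sysctl(8):

[root@300 ~]# sysctl -p
[root@301 ~]# sysctl -p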

Install IBM TSM Server

Connect to each node with SSH forwarding enabled and install the IBM TSM server.
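
The installers are graphical, so the SSH forwarding part can look like this (a simple sketch; [LOCAL] follows the convention used earlier in this article):

[LOCAL]# ssh -X root@300
[LOCAL]# ssh -X root@301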

[root@300 ~]# chmod +x 7.1.6.000-TIV-TSMSRV-Linuxx86_64.bin
[root@300 ~]# ./7.1.6.000-TIV-TSMSRV-Linuxx86_64.bin
[root@300 ~]# ./install.sh

… and the second node.

[root@301 ~]# chmod +x 7.1.6.000-TIV-TSMSRV-Linuxx86_64.bin
[root@301 ~]# ./7.1.6.000-TIV-TSMSRV-Linuxx86_64.bin
[root@301 ~]# ./install.sh

Options chosen during installation.

INSTALL | DESELECT 'Languages' and DESELECT 'Operations Center'
INSTALL | /opt/tivoli/IBM/IBMIMShared
INSTALL | /opt/tivoli/IBM/InstallationManager/eclipse
INSTALL | /opt/tivoli/tsm

Screenshots from the installation process.

ibm-tsm-install-01

ibm-tsm-install-02

ibm-tsm-install-03

ibm-tsm-install-04

ibm-tsm-install-05

ibm-tsm-install-06

Install IBM TSM Client

[root@300 ~]# yum localinstall gskcrypt64-8.0.50.66.linux.x86_64.rpm \
                               gskssl64-8.0.50.66.linux.x86_64.rpm \
                               TIVsm-API64.x86_64.rpm \
                               TIVsm-BA.x86_64.rpm
[root@301 ~]# yum localinstall gskcrypt64-8.0.50.66.linux.x86_64.rpm \
                               gskssl64-8.0.50.66.linux.x86_64.rpm \
                               TIVsm-API64.x86_64.rpm \
                               TIVsm-BA.x86_64.rpm

Nodes Configuration for IBM TSM Server

[root@300 ~]# useradd -u 1500 -m tsm0
[root@301 ~]# useradd -u 1500 -m tsm0
[root@300 ~]# passwd tsm0
Changing password for user tsm0.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

[root@301 ~]# passwd tsm0
Changing password for user tsm0.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
[root@300 ~]# tail -1 /etc/passwd
tsm0:x:1500:1500::/home/tsm0:/bin/bash

[root@301 ~]# tail -1 /etc/passwd
tsm0:x:1500:1500::/home/tsm0:/bin/bash
[root@300 ~]# tail -1 /etc/group
tsm0:x:1500:

[root@301 ~]# tail -1 /etc/group
tsm0:x:1500:
[root@300 ~]# cat /etc/security/limits.conf
# ORACLE
oracle              soft    nproc   16384
oracle              hard    nproc   16384
oracle              soft    nofile  4096
oracle              hard    nofile  65536
oracle              soft    stack   10240

# TSM
tsm0                soft    nofile  32768
tsm0                hard    nofile  32768

[root@301 ~]# cat /etc/security/limits.conf
# ORACLE
oracle              soft    nproc   16384
oracle              hard    nproc   16384
oracle              soft    nofile  4096
oracle              hard    nofile  65536
oracle              soft    stack   10240

# TSM
tsm0                soft    nofile  32768
tsm0                hard    nofile  32768
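
After the next login the tsm0 user should pick up the new limit; a quick sanity check (it should print 32768):

[root@300 ~]# su - tsm0 -c 'ulimit -n'
32768
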
[root@300 ~]# :> /var/run/dsmserv_tsm0.pid
[root@301 ~]# :> /var/run/dsmserv_tsm0.pid
[root@300 ~]# chown tsm0:tsm0 /var/run/dsmserv_tsm0.pid
[root@301 ~]# chown tsm0:tsm0 /var/run/dsmserv_tsm0.pid
[root@300 ~]# hares -state | grep TSM
TSM0_dg               State                 300  OFFLINE
TSM0_dg               State                 301  OFFLINE
TSM0_ip_bond0         State                 300  OFFLINE
TSM0_ip_bond0         State                 301  OFFLINE
TSM0_mnt_active_log   State                 300  OFFLINE
TSM0_mnt_active_log   State                 301  OFFLINE
TSM0_mnt_archive_log  State                 300  OFFLINE
TSM0_mnt_archive_log  State                 301  OFFLINE
TSM0_mnt_db_01        State                 300  OFFLINE
TSM0_mnt_db_01        State                 301  OFFLINE
TSM0_mnt_db_02        State                 300  OFFLINE
TSM0_mnt_db_02        State                 301  OFFLINE
TSM0_mnt_db_03        State                 300  OFFLINE
TSM0_mnt_db_03        State                 301  OFFLINE
TSM0_mnt_db_backup_01 State                 300  OFFLINE
TSM0_mnt_db_backup_01 State                 301  OFFLINE
TSM0_mnt_db_backup_02 State                 300  OFFLINE
TSM0_mnt_db_backup_02 State                 301  OFFLINE
TSM0_mnt_db_backup_03 State                 300  OFFLINE
TSM0_mnt_db_backup_03 State                 301  OFFLINE
TSM0_mnt_instance     State                 300  OFFLINE
TSM0_mnt_instance     State                 301  OFFLINE
TSM0_mnt_pool0_01     State                 300  OFFLINE
TSM0_mnt_pool0_01     State                 301  OFFLINE
TSM0_mnt_pool0_02     State                 300  OFFLINE
TSM0_mnt_pool0_02     State                 301  OFFLINE
TSM0_mnt_pool0_03     State                 300  OFFLINE
TSM0_mnt_pool0_03     State                 301  OFFLINE
TSM0_mnt_pool0_04     State                 300  OFFLINE
TSM0_mnt_pool0_04     State                 301  OFFLINE
TSM0_mnt_pool0_05     State                 300  OFFLINE
TSM0_mnt_pool0_05     State                 301  OFFLINE
TSM0_mnt_pool0_06     State                 300  OFFLINE
TSM0_mnt_pool0_06     State                 301  OFFLINE
TSM0_nic_bond0        State                 300  ONLINE
TSM0_nic_bond0        State                 301  ONLINE
TSM0_server           State                 300  OFFLINE
TSM0_server           State                 301  OFFLINE
[root@300 ~]# hares -online TSM0_mnt_instance -sys $( hostname -s )
[root@300 ~]# hares -online TSM0_ip_bond0     -sys $( hostname -s )
[root@300 ~]# hares -state | grep TSM0 | grep 301 | grep mnt | grep -v instance | awk '{print $1}' \
                | while read I; do hares -online ${I} -sys $( hostname -s ); done
[root@300 ~]# hares -state | grep 301 | grep TSM0
TSM0_dg               State                 301  ONLINE
TSM0_ip_bond0         State                 301  ONLINE
TSM0_mnt_active_log   State                 301  ONLINE
TSM0_mnt_archive_log  State                 301  ONLINE
TSM0_mnt_db_01        State                 301  ONLINE
TSM0_mnt_db_02        State                 301  ONLINE
TSM0_mnt_db_03        State                 301  ONLINE
TSM0_mnt_db_backup_01 State                 301  ONLINE
TSM0_mnt_db_backup_02 State                 301  ONLINE
TSM0_mnt_db_backup_03 State                 301  ONLINE
TSM0_mnt_instance     State                 301  ONLINE
TSM0_mnt_pool0_01     State                 301  ONLINE
TSM0_mnt_pool0_02     State                 301  ONLINE
TSM0_mnt_pool0_03     State                 301  ONLINE
TSM0_mnt_pool0_04     State                 301  ONLINE
TSM0_mnt_pool0_05     State                 301  ONLINE
TSM0_mnt_pool0_06     State                 301  ONLINE
TSM0_nic_bond0        State                 301  ONLINE
TSM0_server           State                 301  OFFLINE
[root@300 ~]# find /tsm0 | grep -v 'lost+found'
/tsm0
/tsm0/active_log
/tsm0/archive_log
/tsm0/db
/tsm0/db/db_01
/tsm0/db/db_02
/tsm0/db/db_03
/tsm0/db_backup
/tsm0/db_backup/db_backup_01
/tsm0/db_backup/db_backup_02
/tsm0/db_backup/db_backup_03
/tsm0/pool0
/tsm0/pool0/pool0_01
/tsm0/pool0/pool0_02
/tsm0/pool0/pool0_03
/tsm0/pool0/pool0_04
/tsm0/pool0/pool0_05
/tsm0/pool0/pool0_06
[root@300 ~]# chown -R tsm0:tsm0 /tsm0

IBM TSM Server Configuration

Connect to one of the nodes with SSH forwarding enabled.

[root@300 ~]# cd /opt/tivoli/tsm/server/bin
[root@300 /opt/tivoli/tsm/server/bin]# ./dsmicfgx
Preparing to install...
Extracting the JRE from the installer archive...
Unpacking the JRE...
Extracting the installation resources from the installer archive...
Configuring the installer for this system's environment...

Launching installer...

Options chosen during configuration.

INSTALL | Instance user ID:
INSTALL |   tsm0
INSTALL |
INSTALL | Instance directory:
INSTALL |   /tsm0
INSTALL |
INSTALL | Database directories:
INSTALL |   /tsm0/db/db_01
INSTALL |   /tsm0/db/db_02
INSTALL |   /tsm0/db/db_03
INSTALL |
INSTALL | Active log directory:
INSTALL |   /tsm0/active_log
INSTALL |
INSTALL | Primary archive log directory:
INSTALL |   /tsm0/archive_log
INSTALL |
INSTALL | Instance autostart setting:
INSTALL |   Start automatically using the instance user ID

Screenshots from the configuration process.

ibm-tsm-configure-01

ibm-tsm-configure-02

ibm-tsm-configure-03

ibm-tsm-configure-04

ibm-tsm-configure-05

ibm-tsm-configure-06

ibm-tsm-configure-07

ibm-tsm-configure-08

ibm-tsm-configure-09

Log from the IBM TSM DB2 instance creation.

Creating the database manager instance...
The database manager instance was created successfully.

Formatting the server database...

ANR7800I DSMSERV generated at 16:39:04 on Jun  8 2016.

IBM Tivoli Storage Manager for Linux/x86_64
Version 7, Release 1, Level 6.000

Licensed Materials - Property of IBM

(C) Copyright IBM Corporation 1990, 2016.
All rights reserved.
U.S. Government Users Restricted Rights - Use, duplication or disclosure
restricted by GSA ADP Schedule Contract with IBM Corporation.

ANR7801I Subsystem process ID is 5208.
ANR0900I Processing options file /tsm0/dsmserv.opt.
ANR0010W Unable to open message catalog for language en_US.UTF-8. The default
language message catalog will be used.
ANR7814I Using instance directory /tsm0.
ANR4726I The ICC support module has been loaded.
ANR0152I Database manager successfully started.
ANR2976I Offline DB backup for database TSMDB1 started.
ANR2974I Offline DB backup for database TSMDB1 completed successfully.
ANR0992I Server's database formatting complete.
ANR0369I Stopping the database manager because of a server shutdown.

Format completed with return code 0
Beginning initial configuration...

ANR7800I DSMSERV generated at 16:39:04 on Jun  8 2016.

IBM Tivoli Storage Manager for Linux/x86_64
Version 7, Release 1, Level 6.000

Licensed Materials - Property of IBM

(C) Copyright IBM Corporation 1990, 2016.
All rights reserved.
U.S. Government Users Restricted Rights - Use, duplication or disclosure
restricted by GSA ADP Schedule Contract with IBM Corporation.

ANR7801I Subsystem process ID is 8741.
ANR0900I Processing options file /tsm0/dsmserv.opt.
ANR0010W Unable to open message catalog for language en_US.UTF-8. The default
language message catalog will be used.
ANR7814I Using instance directory /tsm0.
ANR4726I The ICC support module has been loaded.
ANR0990I Server restart-recovery in progress.
ANR0152I Database manager successfully started.
ANR1628I The database manager is using port 51500 for server connections.
ANR1636W The server machine GUID changed: old value (), new value (f0.8a.27.61-
.e5.43.b6.11.92.b5.00.0a.f7.49.31.18).
ANR2100I Activity log process has started.
ANR3733W The master encryption key cannot be generated because the server
password is not set.
ANR3339I Default Label in key data base is TSM Server SelfSigned Key.
ANR4726I The NAS-NDMP support module has been loaded.
ANR1794W TSM SAN discovery is disabled by options.
ANR2200I Storage pool BACKUPPOOL defined (device class DISK).
ANR2200I Storage pool ARCHIVEPOOL defined (device class DISK).
ANR2200I Storage pool SPACEMGPOOL defined (device class DISK).
ANR2560I Schedule manager started.
ANR0993I Server initialization complete.
ANR0916I TIVOLI STORAGE MANAGER distributed by Tivoli is now ready for use.
ANR2094I Server name set to TSM0.
ANR4865W The server name has been changed. Windows clients that use "passworda-
ccess generate" may be unable to authenticate with the server.
ANR2068I Administrator ADMIN registered.
ANR2076I System privilege granted to administrator ADMIN.
ANR1912I Stopping the activity log because of a server shutdown.
ANR0369I Stopping the database manager because of a server shutdown.

Configuration is complete.

Modify IBM TSM Server Startup Script

The startup script had to be modified to work properly with Veritas Cluster Server: the status routine now exits with code 100 when the instance is stopped and 110 when it is running, which is what the VCS Application agent expects from its MonitorProgram. The complete modified script is below; the exact changes are shown in the diff(1) output after it.

[root@300 ~]# cat /etc/init.d/tsm0
#!/bin/bash
#
# dsmserv       Start/Stop IBM Tivoli Storage Manager
#
# chkconfig: - 90 10
# description: Starts/Stops an IBM Tivoli Storage Manager Server instance
# processname: dsmserv
# pidfile: /var/run/dsmserv_instancename.pid

#***********************************************************************
# Distributed Storage Manager (ADSM)                                   *
# Server Component                                                     *
#                                                                      *
# IBM Confidential                                                     *
# (IBM Confidential-Restricted when combined with the Aggregated OCO   *
# Source Modules for this Program)                                     *
#                                                                      *
# OCO Source Materials                                                 *
#                                                                      *
# 5765-303 (C) Copyright IBM Corporation 1990, 2009                    *
#***********************************************************************

#
# This init script is designed to start a single Tivoli Storage Manager
# server instance on a system where multiple instances might be running.
# It assumes that the name of the script is also the name of the instance
# to be started (or, if the script name starts with Snn or Knn, where 'n'
# is a digit, that the name of the instance is the script name with the
# three letter prefix removed).
#
# To use the script to start multiple instances, install multiple copies
# of the script in /etc/rc.d/init.d, naming each copy after the instance
# it will start.
#
# The script makes a number of simplifying assumptions about the way
# the instance is set up.
# - The Tivoli Storage Manager Server instance runs as a non-root user whose
#   name is the instance name
# - The server's instance directory (the directory in which it keeps all of
#   its important state information) is in a subdirectory of the home
#   directory called tsminst1.
# If any of these assumptions are not valid, then the script will require
# some modifications to work.  To start with, look at the
# instance, instance_user, and instance_dir variables set below...

# First of all, check for syntax
if [[ $# != 1 ]]
then
  echo $"Usage: $0 {start|stop|status|restart}"
  exit 1
fi

prog="dsmserv"
instance=tsm0
serverBinDir="/opt/tivoli/tsm/server/bin"

if [[ ! -e $serverBinDir/$prog ]]
then
   echo "IBM Tivoli Storage Manager Server not found on this system ($serverBinDir/$prog)"
   exit -1
fi

# see if $0 starts with Snn or Knn, where 'n' is a digit.  If it does, then
# strip off the prefix and use the remainder as the instance name.
if [[ ${instance:0:1} == S ]]
then
  instance=${instance#S[0123456789][0123456789]}
elif [[ ${instance:0:1} == K ]]
then
  instance=${instance#K[0123456789][0123456789]}
fi

instance_home=`${serverBinDir}/dsmfngr $instance 2>/dev/null`
if [[ -z "$instance_home" ]]
then
  instance_home="/home/${instance}"
fi
instance_user=tsm0
instance_dir=/tsm0
pidfile="/var/run/${prog}_${instance}.pid"

PATH=/sbin:/bin:/usr/bin:/usr/sbin:$serverBinDir

#
# Do some basic error checking before starting the server
#
# Is the server installed?
if [[ ! -e $serverBinDir/$prog ]]
then
   echo "IBM Tivoli Storage Manager Server not found on this system"
   exit 0
fi

# Does the instance directory exist?
if [[ ! -d $instance_dir ]]
then
 echo "Instance directory ${instance_dir} does not exist"
 exit -1
fi
rc=0

SLEEP_INTERVAL=5
MAX_SLEEP_TIME=10

function check_pid_file()
{
    test -f $pidfile
}

function check_process()
{
    ps -p `cat $pidfile` > /dev/null
}

function check_running()
{
    check_pid_file && check_process
}

start() {
        # set the standard value for the user limits
        ulimit -c unlimited
        ulimit -d unlimited
        ulimit -f unlimited
        ulimit -n 65536
        ulimit -t unlimited
        ulimit -u 16384

        echo -n "Starting $prog instance $instance ... "
        #if we're already running, say so
        status 0
        if [[ $g_status == "running" ]]
        then
           echo "$prog instance $instance already running..."
           exit 0
        else
           $serverBinDir/rc.dsmserv -u $instance_user -i $instance_dir -q >/dev/null 2>&1 &
           # give enough time to server to start
           sleep 5
           # if the lock file got created, we did ok
           if [[ -f $instance_dir/dsmserv.v6lock ]]
           then
              gawk --source '{print $4}' $instance_dir/dsmserv.v6lock>$pidfile
              [ $? = 0 ] && echo "Succeeded" || echo "Failed"
              rc=$?
              echo
              [ $rc -eq 0 ] && touch /var/lock/subsys/${instance}
              return $rc
           else
              echo "Failed"
              return 1
           fi
       fi
}

stop() {
        echo  "Stopping $prog instance $instance ..."
        if [[ -e $pidfile ]]
        then
           # make sure someone else didn't kill us already
           progpid=`cat $pidfile`
           running=`ps -ef | grep $prog | grep -w $progpid | grep -v grep`
           if [[ -n $running ]]
           then
              #echo "executing cmd kill `cat $pidfile`"
              kill `cat $pidfile`

              total_slept=0
              while check_running; do \
                  echo  "$prog instance $instance still running, will check after $SLEEP_INTERVAL seconds"
                  sleep $SLEEP_INTERVAL
                  total_slept=`expr $total_slept + 1`

                  if [ "$total_slept" -gt "$MAX_SLEEP_TIME" ]; then \
                      break
                  fi
              done

              if  check_running
              then
                echo "Unable to stop $prog instance $instance"
                exit 1
              else
                echo "$prog instance $instance stopped Successfully"
              fi
           fi
           # remove the pid file so that we don't try to kill same pid again
           rm $pidfile
           if [[ $? != 0 ]]
           then
              echo "Process $prog instance $instance stopped, but unable to remove $pidfile"
              echo "Be sure to remove $pidfile."
              exit 1
           fi
        else
           echo "$prog instance $instance is not running."
        fi
        rc=$?
        echo
        [ $rc -eq 0 ] && rm -f /var/lock/subsys/${instance}
        return $rc
}

status() {
      # check usage
      if [[ $# != 1 ]]
      then
         echo "$0: Invalid call to status routine. Expected argument: "
         echo "where display_to_screen is 0 or 1 and indicates whether output will be sent to screen."
         exit 100
         # exit 1
      fi
      #see if file $pidfile exists
      # if it does, see if process is running
      # if it doesn't, it's not running - or at least was not started by dsmserv.rc
      if [[ -e $pidfile ]]
      then
         progpid=`cat $pidfile`
         running=`ps -ef | grep $prog | grep -w $progpid | grep -v grep`
         if [[ -n $running ]]
         then
            g_status="running"
         else
            g_status="stopped"
            # remove the pidfile if stopped.
            if [[ -e $pidfile ]]
            then
                rm $pidfile
                if [[ $? != 0 ]]
                then
                    echo "$prog instance $instance stopped, but unable to remove $pidfile"
                    echo "Be sure to remove $pidfile."
                fi
            fi
         fi
      else
        g_status="stopped"
      fi
      if [[ $1 == 1 ]]
      then
            echo "Status of $prog instance $instance: $g_status"
      fi

      if [ "${1}" = "1" ]
      then
        case ${g_status} in
          (stopped) EXIT=100 ;;
          (running) EXIT=110 ;;
        esac
        exit ${EXIT}
      fi
}

restart() {
        stop
        start
}

case "$1" in
  start)
        start
        ;;
  stop)
        stop
        ;;
  status)
        status 1
        ;;
  restart|reload)
        restart
        ;;
  *)
        echo $"Usage: $0 {start|stop|status|restart}"
        exit 1
esac

exit $?

… and the diff(1) between the original and the modified script.

[root@300 ~]# diff -u /etc/init.d/tsm0 /root/tsm0
--- /etc/init.d/tsm0    2016-07-13 13:20:43.000000000 +0200
+++ /root/tsm0          2016-07-13 13:27:41.000000000 +0200
@@ -207,7 +207,8 @@
       then
          echo "$0: Invalid call to status routine. Expected argument: "
          echo "where display_to_screen is 0 or 1 and indicates whether output will be sent to screen."
-         exit 1
+         exit 100
+         # exit 1
       fi
       #see if file $pidfile exists
       # if it does, see if process is running
@@ -239,6 +240,15 @@
       then
             echo "Status of $prog instance $instance: $g_status"
       fi
+
+      if [ "${1}" = "1" ]
+      then
+        case ${g_status} in
+          (stopped) EXIT=100 ;;
+          (running) EXIT=110 ;;
+        esac
+        exit ${EXIT}
+      fi
 }

 restart() {

Copy tsm0 Profile to the Other Node

[root@300 ~]# pwd
/home
[root@300 /home]# tar -czf - tsm0 | ssh 301 'tar -C /home -xzf -'
[root@300 ~]# cat /home/tsm0/sqllib/db2nodes.cfg
0 TSM0.domain.com 0
[root@301 ~]# cat /home/tsm0/sqllib/db2nodes.cfg
0 TSM0.domain.com 0

IBM TSM Server Start

[root@300 ~]# hares -online TSM0_ip_bond0         -sys 300
[root@300 ~]# hares -online TSM0_mnt_active_log   -sys 300
[root@300 ~]# hares -online TSM0_mnt_archive_log  -sys 300
[root@300 ~]# hares -online TSM0_mnt_db_01        -sys 300
[root@300 ~]# hares -online TSM0_mnt_db_02        -sys 300
[root@300 ~]# hares -online TSM0_mnt_db_03        -sys 300
[root@300 ~]# hares -online TSM0_mnt_db_backup_01 -sys 300
[root@300 ~]# hares -online TSM0_mnt_db_backup_02 -sys 300
[root@300 ~]# hares -online TSM0_mnt_db_backup_03 -sys 300
[root@300 ~]# hares -online TSM0_mnt_instance     -sys 300
[root@300 ~]# hares -online TSM0_mnt_pool0_01     -sys 300
[root@300 ~]# hares -online TSM0_mnt_pool0_02     -sys 300
[root@300 ~]# hares -online TSM0_mnt_pool0_03     -sys 300
[root@300 ~]# hares -online TSM0_mnt_pool0_04     -sys 300
[root@300 ~]# hares -online TSM0_mnt_pool0_05     -sys 300
[root@300 ~]# hares -online TSM0_mnt_pool0_06     -sys 300
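
The Mount resources above could also be onlined in a loop similar to the one used for node 301 earlier; a rough sketch:

[root@300 ~]# hares -state | grep TSM0 | grep _mnt_ | awk '{print $1}' | sort -u \
                | while read I; do hares -online ${I} -sys 300; done
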
[root@300 ~]# hares -state | grep TSM0 | grep 300
TSM0_dg               State                 300  ONLINE
TSM0_ip_bond0         State                 300  ONLINE
TSM0_mnt_active_log   State                 300  ONLINE
TSM0_mnt_archive_log  State                 300  ONLINE
TSM0_mnt_db_01        State                 300  ONLINE
TSM0_mnt_db_02        State                 300  ONLINE
TSM0_mnt_db_03        State                 300  ONLINE
TSM0_mnt_db_backup_01 State                 300  ONLINE
TSM0_mnt_db_backup_02 State                 300  ONLINE
TSM0_mnt_db_backup_03 State                 300  ONLINE
TSM0_mnt_instance     State                 300  ONLINE
TSM0_mnt_pool0_01     State                 300  ONLINE
TSM0_mnt_pool0_02     State                 300  ONLINE
TSM0_mnt_pool0_03     State                 300  ONLINE
TSM0_mnt_pool0_04     State                 300  ONLINE
TSM0_mnt_pool0_05     State                 300  ONLINE
TSM0_mnt_pool0_06     State                 300  ONLINE
TSM0_nic_bond0        State                 300  ONLINE
TSM0_server           State                 300  OFFLINE

[root@300 ~]# cat >> /etc/services << __EOF
DB2_tsm0        60000/tcp
DB2_tsm0_1      60001/tcp
DB2_tsm0_2      60002/tcp
DB2_tsm0_3      60003/tcp
DB2_tsm0_4      60004/tcp
DB2_tsm0_END    60005/tcp
__EOF
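
I assume the same DB2 port entries will also be needed on the other node after a failover, so the same heredoc can be repeated there:

[root@301 ~]# cat >> /etc/services << __EOF
DB2_tsm0        60000/tcp
DB2_tsm0_1      60001/tcp
DB2_tsm0_2      60002/tcp
DB2_tsm0_3      60003/tcp
DB2_tsm0_4      60004/tcp
DB2_tsm0_END    60005/tcp
__EOF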
[root@300 ~]# hagrp -freeze TSM0_site
[root@300 ~]# hastatus -sum

-- SYSTEM STATE
-- System               State                Frozen

A  300            RUNNING              0
A  301            RUNNING              0

-- GROUP STATE
-- Group           System               Probed     AutoDisabled    State

B  NSR_site        300            Y          N               OFFLINE
B  NSR_site        301            Y          N               ONLINE
B  RMAN_site       300            Y          N               OFFLINE
B  RMAN_site       301            Y          N               ONLINE
B  TSM0_site       300            Y          N               PARTIAL
B  TSM0_site       301            Y          N               OFFLINE
B  VCS_site        300            Y          N               OFFLINE
B  VCS_site        301            Y          N               ONLINE

-- GROUPS FROZEN
-- Group

C  TSM0_site

-- RESOURCES DISABLED
-- Group           Type            Resource

H  TSM0_site      Application     TSM0_server
H  TSM0_site      DiskGroup       TSM0_dg
H  TSM0_site      IP              TSM0_ip_bond0
H  TSM0_site      Mount           TSM0_mnt_active_log
H  TSM0_site      Mount           TSM0_mnt_archive_log
H  TSM0_site      Mount           TSM0_mnt_db_01
H  TSM0_site      Mount           TSM0_mnt_db_02
H  TSM0_site      Mount           TSM0_mnt_db_03
H  TSM0_site      Mount           TSM0_mnt_db_backup_01
H  TSM0_site      Mount           TSM0_mnt_db_backup_02
H  TSM0_site      Mount           TSM0_mnt_db_backup_03
H  TSM0_site      Mount           TSM0_mnt_instance
H  TSM0_site      Mount           TSM0_mnt_pool0_01
H  TSM0_site      Mount           TSM0_mnt_pool0_02
H  TSM0_site      Mount           TSM0_mnt_pool0_03
H  TSM0_site      Mount           TSM0_mnt_pool0_04
H  TSM0_site      Mount           TSM0_mnt_pool0_05
H  TSM0_site      Mount           TSM0_mnt_pool0_06
H  TSM0_site      NIC             TSM0_nic_bond0

[root@300 ~]# su - tsm0 -c '/opt/tivoli/tsm/server/bin/dsmserv -i /tsm0'
ANR7800I DSMSERV generated at 16:39:04 on Jun  8 2016.

IBM Tivoli Storage Manager for Linux/x86_64
Version 7, Release 1, Level 6.000

Licensed Materials - Property of IBM

(C) Copyright IBM Corporation 1990, 2016.
All rights reserved.
U.S. Government Users Restricted Rights - Use, duplication or disclosure
restricted by GSA ADP Schedule Contract with IBM Corporation.

ANR7801I Subsystem process ID is 9834.
ANR0900I Processing options file /tsm0/dsmserv.opt.
ANR0010W Unable to open message catalog for language en_US.UTF-8. The default language message
catalog will be used.
ANR7814I Using instance directory /tsm0.
ANR4726I The ICC support module has been loaded.
ANR0990I Server restart-recovery in progress.
ANR0152I Database manager successfully started.
ANR1628I The database manager is using port 51500 for server connections.
ANR1635I The server machine GUID, 54.80.e8.50.e4.48.e6.11.8e.6d.00.0a.f7.49.2b.08, has
initialized.
ANR2100I Activity log process has started.
ANR3733W The master encryption key cannot be generated because the server password is not set.
ANR3339I Default Label in key data base is TSM Server SelfSigned Key.
ANR4726I The NAS-NDMP support module has been loaded.
ANR1794W TSM SAN discovery is disabled by options.
ANR2803I License manager started.
ANR8200I TCP/IP Version 4 driver ready for connection with clients on port 1500.
ANR9639W Unable to load Shared License File dsmreg.sl.
ANR9652I An EVALUATION LICENSE for IBM System Storage Archive Manager will expire on
08/13/2016.
ANR9652I An EVALUATION LICENSE for Tivoli Storage Manager Basic Edition will expire on
08/13/2016.
ANR9652I An EVALUATION LICENSE for Tivoli Storage Manager Extended Edition will expire on
08/13/2016.
ANR2828I Server is licensed to support IBM System Storage Archive Manager.
ANR2828I Server is licensed to support Tivoli Storage Manager Basic Edition.
ANR2828I Server is licensed to support Tivoli Storage Manager Extended Edition.
ANR2560I Schedule manager started.
ANR0984I Process 1 for EXPIRE INVENTORY (Automatic) started in the BACKGROUND at 01:58:03 PM.
ANR0811I Inventory client file expiration started as process 1.
ANR0167I Inventory file expiration process 1 processed for 0 minutes.
ANR0812I Inventory file expiration process 1 completed: processed 0 nodes, examined 0 objects,
deleting 0 backup objects, 0 archive objects, 0 DB backup volumes, and 0 recovery plan files. 0
objects were retried and 0 errors were encountered.
ANR0985I Process 1 for EXPIRE INVENTORY (Automatic) running in the BACKGROUND completed with
completion state SUCCESS at 01:58:03 PM.
ANR0993I Server initialization complete.
ANR0916I TIVOLI STORAGE MANAGER distributed by Tivoli is now ready for use.
TSM:TSM0>q admin
ANR2017I Administrator SERVER_CONSOLE issued command: QUERY ADMIN

Administrator        Days Since       Days Since      Locked?       Privilege Classes
Name                Last Access     Password Set
--------------     ------------     ------------     ----------     -----------------------
ADMIN                        <1               <1         No         System
ADMIN_CENTER                 halt
ANR2017I Administrator SERVER_CONSOLE issued command: HALT
ANR1912I Stopping the activity log because of a server shutdown.
ANR0369I Stopping the database manager because of a server shutdown.
ANR0991I Server shutdown complete.


[root@300 ~]# hagrp -unfreeze TSM0_site

[root@300 ~]# hares -state | grep TSM0 | grep 300
TSM0_dg               State                 300  ONLINE
TSM0_ip_bond0         State                 300  ONLINE
TSM0_mnt_active_log   State                 300  ONLINE
TSM0_mnt_archive_log  State                 300  ONLINE
TSM0_mnt_db_01        State                 300  ONLINE
TSM0_mnt_db_02        State                 300  ONLINE
TSM0_mnt_db_03        State                 300  ONLINE
TSM0_mnt_db_backup_01 State                 300  ONLINE
TSM0_mnt_db_backup_02 State                 300  ONLINE
TSM0_mnt_db_backup_03 State                 300  ONLINE
TSM0_mnt_instance     State                 300  ONLINE
TSM0_mnt_pool0_01     State                 300  ONLINE
TSM0_mnt_pool0_02     State                 300  ONLINE
TSM0_mnt_pool0_03     State                 300  ONLINE
TSM0_mnt_pool0_04     State                 300  ONLINE
TSM0_mnt_pool0_05     State                 300  ONLINE
TSM0_mnt_pool0_06     State                 300  ONLINE
TSM0_nic_bond0        State                 300  ONLINE
TSM0_server           State                 300  OFFLINE

[root@301 ~]# hares -online TSM0_server -sys 300

Ignore the errors below during the first IBM TSM server startup.

IGNORE | ERRORS TO IGNORE DURING FIRST IBM TSM SERVER START
IGNORE | 
IGNORE | DBI1306N  The instance profile is not defined.
IGNORE |
IGNORE | Explanation:
IGNORE |
IGNORE | The instance is not defined in the target machine registry.
IGNORE |
IGNORE | User response:
IGNORE |
IGNORE | Specify an existing instance name or create the required instance.

Install IBM TSM Server Licenses

Screenshots from that process are below.

ibm-tsm-install-license-01

ibm-tsm-install-license-02

ibm-tsm-install-license-03

ibm-tsm-install-license-04

Let's now register the licenses for the IBM TSM server.

tsm: TSM0_SITE>register license file=/opt/tivoli/tsm/server/bin/tsmee.lic
ANR2852I Current license information:
ANR2853I New license information:
ANR2828I Server is licensed to support Tivoli Storage Manager Basic Edition.
ANR2828I Server is licensed to support Tivoli Storage Manager Extended Edition.

IBM TSM Client Configuration on the IBM TSM Server Nodes

[root@300 ~]# cat > /opt/tivoli/tsm/client/ba/bin/dsm.opt << __EOF
SERVERNAME TSM0
__EOF

[root@301 ~]# cat > /opt/tivoli/tsm/client/ba/bin/dsm.opt << __EOF
SERVERNAME TSM0
__EOF

[root@300 ~]# cat > /opt/tivoli/tsm/client/ba/bin/dsm.sys << __EOF
SERVERNAME TSM0
COMMMethod TCPip
TCPPort 1500
TCPSERVERADDRESS localhost
SCHEDLOGNAME /opt/tivoli/tsm/client/ba/bin/dsmsched.log
ERRORLOGNAME /opt/tivoli/tsm/client/ba/bin/dsmerror.log
SCHEDLOGRETENTION 7 D
ERRORLOGRETENTION 7 D
__EOF

[root@301 ~]# cat > /opt/tivoli/tsm/client/ba/bin/dsm.sys << __EOF
SERVERNAME TSM0
COMMMethod TCPip
TCPPort 1500
TCPSERVERADDRESS localhost
SCHEDLOGNAME /opt/tivoli/tsm/client/ba/bin/dsmsched.log
ERRORLOGNAME /opt/tivoli/tsm/client/ba/bin/dsmerror.log
SCHEDLOGRETENTION 7 D
ERRORLOGRETENTION 7 D
__EOF
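
Later, once a client node is registered on the server, this client configuration can be verified with dsmc(1), for example:

[root@300 ~]# dsmc query session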

Install lin_tape on IBM TSM Server

[root@ALL]# uname -r
2.6.32-504.el6.x86_64

[root@ALL]# uname -r | sed 's|.x86_64||g'
2.6.32-504.el6

[root@ALL]# yum --showduplicates list kernel-devel | grep 2.6.32-504.el6
kernel-devel.x86_64            2.6.32-504.el6                 rhel-6-server-rpms

[root@ALL]# yum install rpm-build kernel-devel-2.6.32-504.el6
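
The binary lin_tape package installed below is built from the source RPM shipped by IBM; the exact source file name is an assumption here, but the rebuild step looks roughly like this:

[root@ALL]# rpmbuild --rebuild lin_tape-3.0.10-1.src.rpm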

[root@ALL]# rpm -Uvh /root/rpmbuild/RPMS/x86_64/lin_tape-3.0.10-1.x86_64.rpm
Preparing...                ########################################### [100%]
   1:lin_tape               ########################################### [100%]
Starting lin_tape...
lin_tape loaded

[root@ALL]# rpm -Uvh lin_taped-3.0.10-rhel6.x86_64.rpm
Preparing...                ########################################### [100%]
   1:lin_taped              ########################################### [100%]
Starting lin_tape...
lin_taped loaded

[root@ALL]# /etc/init.d/lin_tape start
Starting lin_tape... lin_taped already running. Abort!

[root@ALL]# /etc/init.d/lin_tape restart
Shutting down lin_tape... lin_taped unloaded
Starting lin_tape...

Library Configuration

This is quite an unusual configuration: the IBM TS3310 library with 4 LTO4 drives is logically partitioned into two logical libraries, with 2 drives dedicated to Dell/EMC Networker and 2 drives dedicated to the IBM TSM server. Such a library is shown below.

ibm-tsm-ts3310.jpg

The changers and tape drives for each backup system.

Networker | (L) 000001317577_LLA changer0
TSM       | (L) 000001317577_LLB changer1_persistent_TSM0
Networker | (1) 7310132058       tape0
Networker | (2) 7310295146       tape1
TSM       | (3) 7310214751       tape2_persistent_TSM0
TSM       | (4) 7310214904       tape3_persistent_TSM0
[root@300 ~]# find /dev/IBM*
/dev/IBMchanger0
/dev/IBMchanger1
/dev/IBMSpecial
/dev/IBMtape
/dev/IBMtape0
/dev/IBMtape0n
/dev/IBMtape1
/dev/IBMtape1n
/dev/IBMtape2
/dev/IBMtape2n
/dev/IBMtape3
/dev/IBMtape3n

We will use udev(7) for persistent device names.

[root@300 ~]# udevadm info -a -p $(udevadm info -q path -n /dev/IBMtape0)    | grep -i serial
    ATTR{serial_num}=="7310132058"
[root@300 ~]# udevadm info -a -p $(udevadm info -q path -n /dev/IBMtape1)    | grep -i serial
    ATTR{serial_num}=="7310295146"
[root@300 ~]# udevadm info -a -p $(udevadm info -q path -n /dev/IBMtape2)    | grep -i serial
    ATTR{serial_num}=="7310214751"
[root@300 ~]# udevadm info -a -p $(udevadm info -q path -n /dev/IBMtape3)    | grep -i serial
    ATTR{serial_num}=="7310214904"
[root@300 ~]# udevadm info -a -p $(udevadm info -q path -n /dev/IBMchanger0) | grep -i serial
    ATTR{serial_num}=="000001317577_LLA"
[root@300 ~]# udevadm info -a -p $(udevadm info -q path -n /dev/IBMchanger1) | grep -i serial
    ATTR{serial_num}=="000001317577_LLB"
[root@300 ~]# cat /proc/scsi/IBM*
lin_tape version: 3.0.10
lin_tape major number: 239
Attached Changer Devices:
Number  model       SN                HBA             SCSI            FO Path
0       3576-MTL    000001317577_LLA  qla2xxx         2:0:1:1         NA
1       3576-MTL    000001317577_LLB  qla2xxx         4:0:1:1         NA
lin_tape version: 3.0.10
lin_tape major number: 239
Attached Tape Devices:
Number  model       SN                HBA             SCSI            FO Path
0       ULT3580-TD4 7310132058        qla2xxx         2:0:0:0         NA
1       ULT3580-TD4 7310295146        qla2xxx         2:0:1:0         NA
2       ULT3580-TD4 7310214751        qla2xxx         4:0:0:0         NA
3       ULT3580-TD4 7310214904        qla2xxx         4:0:1:0         NA

[root@300 ~]# cat /etc/udev/rules.d/98-lin_tape.rules
KERNEL=="IBMtape*", SYSFS{serial_num}=="7310132058", MODE="0660", SYMLINK="IBMtape0"
KERNEL=="IBMtape*", SYSFS{serial_num}=="7310295146", MODE="0660", SYMLINK="IBMtape1"
KERNEL=="IBMtape*", SYSFS{serial_num}=="7310214751", MODE="0660", SYMLINK="IBMtape2_persistent_TSM0"
KERNEL=="IBMtape*", SYSFS{serial_num}=="7310214904", MODE="0660", SYMLINK="IBMtape3_persistent_TSM0"
KERNEL=="IBMchanger*", ATTR{serial_num}=="000001317577_LLB", MODE="0660", SYMLINK="IBMchanger1_persistent_TSM0"

[root@301 ~]# /etc/init.d/lin_tape stop
Shutting down lin_tape... lin_taped unloaded

[root@301 ~]# rmmod lin_tape

[root@301 ~]# /etc/init.d/lin_tape start
Starting lin_tape...

New persistent devices.

[root@301 ~]# find /dev/IBM*
/dev/IBMchanger0
/dev/IBMchanger1
/dev/IBMchanger1_persistent_TSM0
/dev/IBMSpecial
/dev/IBMtape
/dev/IBMtape0
/dev/IBMtape0n
/dev/IBMtape1
/dev/IBMtape1n
/dev/IBMtape2
/dev/IBMtape2n
/dev/IBMtape2_persistent_TSM0
/dev/IBMtape3
/dev/IBMtape3n
/dev/IBMtape3_persistent_TSM0

Let's update the paths to the tape drives now.

tsm: TSM0_SITE>query path f=d

                   Source Name: TSM0_SITE
                   Source Type: SERVER
              Destination Name: TS3310
              Destination Type: LIBRARY
                       Library:
                     Node Name:
                        Device: /dev/IBMchanger0
              External Manager:
              ZOS Media Server:
                  Comm. Method:
                           LUN:
                     Initiator: 0
                     Directory:
                       On-Line: Yes
Last Update by (administrator): ADMIN
         Last Update Date/Time: 09/16/2014 13:36:14

                   Source Name: TSM0_SITE
                   Source Type: SERVER
              Destination Name: DRIVE0
              Destination Type: DRIVE
                       Library: TS3310
                     Node Name:
                        Device: /dev/IBMtape0
              External Manager:
              ZOS Media Server:
                  Comm. Method:
                           LUN:
                     Initiator: 0
                     Directory:
                       On-Line: Yes
Last Update by (administrator): SERVER_CONSOLE
         Last Update Date/Time: 07/14/2016 14:02:02

                   Source Name: TSM0_SITE
                   Source Type: SERVER
              Destination Name: DRIVE1
              Destination Type: DRIVE
                       Library: TS3310
                     Node Name:
                        Device: /dev/IBMtape1
              External Manager:
              ZOS Media Server:
                  Comm. Method:
                           LUN:
                     Initiator: 0
                     Directory:
                       On-Line: Yes
Last Update by (administrator): SERVER_CONSOLE
         Last Update Date/Time: 07/14/2016 13:59:48

tsm: TSM0_SITE>update path TSM0_SITE TS3310 SRCType=SERVER DESTType=LIBRary online=no
ANR1722I A path from TSM0_SITE to TS3310 has been updated.

tsm: TSM0_SITE>update path TSM0_SITE TS3310 SRCType=SERVER DESTType=LIBRary device=/dev/IBMchanger1_persistent_TSM0
ANR1722I A path from TSM0_SITE to TS3310 has been updated.

tsm: TSM0_SITE>update path TSM0_SITE TS3310 SRCType=SERVER DESTType=LIBRary online=yes
ANR1722I A path from TSM0_SITE to TS3310 has been updated.

tsm: TSM0_SITE>update drive TS3310           DRIVE1           SERial=AUTODetect element=AUTODetect
ANR8467I Drive DRIVE1 in library TS3310 updated.

tsm: TSM0_SITE>update drive TS3310           DRIVE1         online=no
ANR8467I Drive DRIVE1 in library TS3310 updated.

tsm: TSM0_SITE>update drive TS3310           DRIVE1           SERial=AUTODetect element=AUTODetect
ANR8467I Drive DRIVE1 in library TS3310 updated.

tsm: TSM0_SITE>update drive TS3310           DRIVE1         online=yes
ANR8467I Drive DRIVE1 in library TS3310 updated.

tsm: TSM0_SITE>update drive TS3310           DRIVE1           SERial=AUTODetect element=AUTODetect
ANR8467I Drive DRIVE1 in library TS3310 updated.

tsm: TSM0_SITE>update drive TS3310           DRIVE1         online=yes
ANR8467I Drive DRIVE1 in library TS3310 updated.

tsm: TSM0_SITE>update path TSM0_SITE DRIVE0 SRCType=SERVER autodetect=yes DESTType=DRIVE library=ts3310 device=/dev/IBMtape2_persistent_TSM0
ANR1722I A path from TSM0_SITE to TS3310 DRIVE0 has been updated.

tsm: TSM0_SITE>update drive TS3310           DRIVE0           SERial=AUTODetect element=AUTODetect
ANR8467I Drive DRIVE0 in library TS3310 updated.

tsm: TSM0_SITE>update path TSM0_SITE DRIVE1 SRCType=SERVER autodetect=yes DESTType=DRIVE library=ts3310 device=/dev/IBMtape3_persistent_TSM0
ANR1722I A path from TSM0_SITE to TS3310 DRIVE1 has been updated.

tsm: TSM0_SITE>update path TSM0_SITE DRIVE1 SRCType=SERVER DESTType=DRIVE library=ts3310 online=yes
ANR1722I A path from TSM0_SITE to TS3310 DRIVE1 has been updated.

tsm: TSM0_SITE>update path TSM0_SITE DRIVE0 SRCType=SERVER DESTType=DRIVE library=ts3310 online=yes
ANR1722I A path from TSM0_SITE to TS3310 DRIVE0 has been updated.


Let's verify that our library works properly.

tsm: TSM0_SITE>audit library TS3310 checklabel=barcode
ANS8003I Process number 2 started.

tsm: TSM0_SITE>query proc

Process      Process Description      Process Status
  Number
--------     --------------------     -------------------------------------------------
       2     AUDIT LIBRARY            ANR8459I Auditing volume inventory for library
                                       TS3310.


tsm: TSM0_SITE>query act
(...)

08/04/2016 14:30:41      ANR2017I Administrator ADMIN issued command: AUDIT
                          LIBRARY TS3310 checklabel=barcode  (SESSION: 8)
08/04/2016 14:30:41      ANR0984I Process 2 for AUDIT LIBRARY started in the
                          BACKGROUND at 02:30:41 PM. (SESSION: 8, PROCESS: 2)
08/04/2016 14:30:41      ANR8457I AUDIT LIBRARY: Operation for library TS3310
                          started as process 2. (SESSION: 8, PROCESS: 2)
08/04/2016 14:30:46      ANR8358E Audit operation is required for library TS3310.
                          (SESSION: 8, PROCESS: 2)
08/04/2016 14:30:51      ANR8439I SCSI library TS3310 is ready for operations.
                          (SESSION: 8, PROCESS: 2)

(...)

08/04/2016 14:31:26      ANR0985I Process 2 for AUDIT LIBRARY running in the
                          BACKGROUND completed with completion state SUCCESS at
                          02:31:26 PM. (SESSION: 8, PROCESS: 2)

(...)

IBM TSM Storage Pool Configuration

IBM TSM container storage pool creation.
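
The directory-container pool stores its backup data as files under the listed directories, so they must exist and be writable by the TSM server instance user before they are defined. A minimal sketch, assuming the usual tsminst1 user and tsmsrvrs group names:

[root@301 ~]# mkdir -p /tsm0/pool0/pool0_0{1,2,3,4,5,6}
[root@301 ~]# chown -R tsminst1:tsmsrvrs /tsm0/pool0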

tsm: TSM0_SITE>define stgpool POOL0_stgFC stgtype=directory
ANR2249I Storage pool POOL0_stgFC is defined.

tsm: TSM0_SITE>define stgpooldirectory POOL0_stgFC /tsm0/pool0/pool0_01,/tsm0/pool0/pool0_02,/tsm0/pool0/pool0_03,/tsm0/pool0/pool0_04,/tsm0/pool0/pool0_05,/tsm0/pool0/pool0_06
ANR3254I Storage pool directory /tsm0/pool0/pool0_01 was defined in storage pool POOL0_stgFC.
ANR3254I Storage pool directory /tsm0/pool0/pool0_02 was defined in storage pool POOL0_stgFC.
ANR3254I Storage pool directory /tsm0/pool0/pool0_03 was defined in storage pool POOL0_stgFC.
ANR3254I Storage pool directory /tsm0/pool0/pool0_04 was defined in storage pool POOL0_stgFC.
ANR3254I Storage pool directory /tsm0/pool0/pool0_05 was defined in storage pool POOL0_stgFC.
ANR3254I Storage pool directory /tsm0/pool0/pool0_06 was defined in storage pool POOL0_stgFC.

tsm: TSM0_SITE>q stgpooldirectory

Storage Pool Name     Directory                                         Access
-----------------     ---------------------------------------------     ------------
POOL0_stgFC           /tsm0/pool0/pool0_01                              Read/Write
POOL0_stgFC           /tsm0/pool0/pool0_02                              Read/Write
POOL0_stgFC           /tsm0/pool0/pool0_03                              Read/Write
POOL0_stgFC           /tsm0/pool0/pool0_04                              Read/Write
POOL0_stgFC           /tsm0/pool0/pool0_05                              Read/Write
POOL0_stgFC           /tsm0/pool0/pool0_06                              Read/Write


IBM TSM Backup Policies Configuration

Below is an example policy.

tsm: TSM0_SITE>def dom  FS backret=30 archret=30
ANR1500I Policy domain FS defined.

tsm: TSM0_SITE>def pol  FS FS
ANR1510I Policy set FS defined in policy domain FS.

tsm: TSM0_SITE>def mg   FS FS FS_1DAY
ANR1520I Management class FS_1DAY defined in policy domain FS, set FS.

tsm: TSM0_SITE>def co   FS FS FS_1DAY   STANDARD type=backup destination=POOL0_STGFC verexists=32 verdeleted=1 retextra=31 retonly=14
ANR1530I Backup copy group STANDARD defined in policy domain FS, set FS, management class FS_1DAY.

tsm: TSM0_SITE>def mg   FS FS FS_1MONTH
ANR1520I Management class FS_1MONTH defined in policy domain FS, set FS.

tsm: TSM0_SITE>def co   FS FS FS_1MONTH STANDARD type=backup destination=POOL0_STGFC  verexists=4 verdeleted=1 retextra=91 retonly=14
ANR1530I Backup copy group STANDARD defined in policy domain FS, set FS, management class FS_1MONTH.

tsm: TSM0_SITE>as defmg FS FS FS_1DAY
ANR1538I Default management class set to FS_1DAY for policy domain FS, set FS.

tsm: TSM0_SITE>act pol  FS FS
ANR1554W DEFAULT Management class FS_1DAY in policy set FS FS does not have an ARCHIVE copygroup:  files will not be archived by default if this set is activated.

Do you wish to proceed? (Yes (Y)/No (N)) y
ANR1554W DEFAULT Management class FS_1DAY in policy set FS FS does not have an ARCHIVE copygroup:  files will not be archived by default if this set is activated.
ANR1514I Policy set FS activated in policy domain FS.
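
To actually make use of these copy groups, a client node would then be registered into the FS domain; a sketch with a hypothetical node name and password:

tsm: TSM0_SITE>register node FS_NODE01 SECRETPASSWORD domain=FS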



I hope that the amount of instructions did not discourage you from using one of the best enterprise backup systems – IBM TSM (now IBM Spectrum Protect) – and one of the best high availability clusters – the Veritas Cluster Server 🙂

EOF

Syncthing on FreeBSD

This article will show you how to setup Syncthing on FreeBSD system.

syncthing-logo.png

One warning at the beginning – all > and < characters in the Syncthing configuration file listings were changed to } and { respectively. This is because of a WordPress limitation. Remember that the Syncthing config is an XML file.
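
If you copy any configuration snippet from this article, a sed(1) one-liner can turn the braces back into proper XML tags (assuming the snippet contains no literal braces of its own):

# sed -e 's|{|<|g' -e 's|}|>|g' snippet.txt > snippet.xml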

For most of my personal backup needs I always use rsync(1), but on limited devices such as phones or tablets it is a real PITA. Thus for the automated import of photos and other files from such devices I prefer to use the Syncthing tool.

If you haven't heard about it yet, here is how the Syncthing https://syncthing.net/ site describes it: "Syncthing replaces proprietary sync and cloud services with something open, trustworthy and decentralized. Your data is your data alone and you deserve to choose where it is stored, if it is shared with some third party and how it's transmitted over the Internet." … and Wikipedia: "Syncthing is a free, open-source peer-to-peer file synchronization application available for Windows, Mac, Linux, Android, Solaris, Darwin, and BSD. It can sync files between devices on a local network, or between remote devices over the Internet. Data security and data safety are built into the design of the software."

One may ask how it's different from Nextcloud, for example. Well, with Nextcloud you have almost an ‘entire’ cloud stack with custom applications at your disposal. With Syncthing you have a synchronization tool between devices and nothing more.

Initially I wanted – similarly to Nextcloud on FreeBSD – to set up everything in a FreeBSD Jail. The problem is that Syncthing does not work under FreeBSD Jails virtualization, as I figured out after several hours of trying to find out what was wrong. The Syncthing management interface was working as expected and was accessible, but Syncthing on the Android mobile phone was not able to connect/sync with the Syncthing instance in the FreeBSD Jail. Sure, I could connect to the Syncthing management interface from the phone, but I still could not do any backup using the Syncthing protocol. Knowing this limitation you have 3 options to choose from:

  • Setup Syncthing on FreeBSD host like any other service.
  • Use FreeBSD Bhyve virtualization for Syncthing instance.
  • Use VirtualBox package/port for Syncthing instance.

I have chosen the first option. The setup is actually the same for Bhyve and VirtualBox, but additional work is needed for the virtualization layer. I will use an Android based mobile phone as an example of a Syncthing client, but you can sync data between computers as well.

One more thing: there is no such thing as a Syncthing server and a Syncthing client. All Syncthing instances/installations are the same; you can just add/remove devices and directories to synchronize between those devices. I used the term ‘client’ above to show that I will be automating the copying of files from the phone to the FreeBSD server with a Syncthing instance, nothing more.

Host

Here are some basic steps that I have done on the FreeBSD host – things like the aliases database, timezone, DNS and basic FreeBSD settings in its core /etc/rc.conf file.

# newaliases -v
/etc/mail/aliases: 29 aliases, longest 10 bytes, 297 bytes total

# ln -s /usr/share/zoneinfo/Europe/Warsaw /etc/localtime

# date
Fri Aug 17 22:05:18 CEST 2018

# echo nameserver 1.1.1.1 > /etc/resolv.conf

# ping -c 3 freebsd.org
PING freebsd.org (96.47.72.84): 56 data bytes
64 bytes from 96.47.72.84: icmp_seq=0 ttl=51 time=117.918 ms
64 bytes from 96.47.72.84: icmp_seq=1 ttl=51 time=115.169 ms
64 bytes from 96.47.72.84: icmp_seq=2 ttl=51 time=115.392 ms

--- freebsd.org ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 115.169/116.160/117.918/1.247 ms

… and the main FreeBSD configuration file.

# cat /etc/rc.conf
# NETWORK
  hostname=blackbox.local
  ifconfig_re0="inet 10.0.0.100/24 up"
  defaultrouter="10.0.0.1"

# DAEMONS | YES
  zfs_enable=YES
  sshd_enable=YES
  ntpd_enable=YES
  syncthing_enable=YES
  syslogd_flags="-s -s"

# DAEMONS | no
  sendmail_enable=NONE
  sendmail_submit_enable=NO
  sendmail_outbound_enable=NO
  sendmail_msp_queue_enable=NO

# OTHER
  dumpdev=NO
  update_motd=NO
  virecover_enable=NO
  clear_tmp_enable=YES

Install

First we will switch from the quarterly to the latest pkg(8) branch to get the most up-to-date packages.

# grep url: /etc/pkg/FreeBSD.conf
  url: "pkg+http://pkg.FreeBSD.org/${ABI}/quarterly",

# sed -i '' s/quarterly/latest/g /etc/pkg/FreeBSD.conf

# grep url: /etc/pkg/FreeBSD.conf
  url: "pkg+http://pkg.FreeBSD.org/${ABI}/latest",

We will now bootstrap pkg(8) and then update its database to the latest available one.

# env ASSUME_ALWAYS_YES=yes pkg update -f
Bootstrapping pkg from pkg+http://pkg.FreeBSD.org/FreeBSD:11:amd64/latest, please wait...
Verifying signature with trusted certificate pkg.freebsd.org.2013102301... done
[syncthing.local] Installing pkg-1.10.5_1...
[syncthing.local] Extracting pkg-1.10.5_1: 100%
Updating FreeBSD repository catalogue...
pkg: Repository FreeBSD load error: access repo file(/var/db/pkg/repo-FreeBSD.sqlite) failed: No such file or directory
[syncthing.local] Fetching meta.txz: 100%    944 B   0.9kB/s    00:01    
[syncthing.local] Fetching packagesite.txz: 100%    6 MiB 352.7kB/s    00:19    
Processing entries: 100%
FreeBSD repository update completed. 32388 packages processed.
All repositories are up to date.

… and then install Syncthing from pkg(8) packages.

# pkg install -y syncthing 
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
The following 1 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
        syncthing: 0.14.48

Number of packages to be installed: 1

The process will require 88 MiB more space.
15 MiB to be downloaded.
[1/1] Fetching syncthing-0.14.48.txz: 100%   15 MiB 525.3kB/s    00:29    
Checking integrity... done (0 conflicting)
[1/1] Installing syncthing-0.14.48...
===> Creating groups.
Creating group 'syncthing' with gid '983'.
===> Creating users
Creating user 'syncthing' with uid '983'.
[1/1] Extracting syncthing-0.14.48: 100%
Message from syncthing-0.14.48:

WARNING: This version is not backwards compatible with 0.13.x, 0.12.x, 0.11.x
nor 0.10.x releases!

For more information, please read:

https://forum.syncthing.net/t/syncthing-v0-14-0/7806
https://github.com/syncthing/syncthing/releases/tag/v0.13.0
https://forum.syncthing.net/t/syncthing-v0-11-0-release-notes/2426
https://forum.syncthing.net/t/syncthing-syncthing-v0-12-0-beryllium-bedbug/6026

The Syncthing package created a syncthing user and group for us.

# id syncthing
uid=983(syncthing) gid=983(syncthing) groups=983(syncthing)

Look how small Syncthing is; these are all the files installed by the net/syncthing package.

# pkg info -l syncthing
syncthing-0.14.48:
        /usr/local/bin/stbench
        /usr/local/bin/stcli
        /usr/local/bin/stcompdirs
        /usr/local/bin/stdisco
        /usr/local/bin/stdiscosrv
        /usr/local/bin/stevents
        /usr/local/bin/stfileinfo
        /usr/local/bin/stfinddevice
        /usr/local/bin/stgenfiles
        /usr/local/bin/stindex
        /usr/local/bin/strelaypoolsrv
        /usr/local/bin/strelaysrv
        /usr/local/bin/stsigtool
        /usr/local/bin/sttestutil
        /usr/local/bin/stvanity
        /usr/local/bin/stwatchfile
        /usr/local/bin/syncthing
        /usr/local/etc/rc.d/syncthing
        /usr/local/etc/rc.d/syncthing-discosrv
        /usr/local/etc/rc.d/syncthing-relaypoolsrv
        /usr/local/etc/rc.d/syncthing-relaysrv
        /usr/local/share/doc/syncthing/AUTHORS
        /usr/local/share/doc/syncthing/LICENSE
        /usr/local/share/doc/syncthing/README.md

Configuration

As shown above, we already have syncthing_enable=YES added to the /etc/rc.conf file.

# /usr/local/etc/rc.d/syncthing rcvar
# syncthing
#
syncthing_enable="NO"
#   (default: "")

# grep syncthing_enable /etc/rc.conf
  syncthing_enable=YES

You may also check other startup options in the Syncthing rc(8) startup script.

# less -N /usr/local/etc/rc.d/syncthing
(...)
      9 # Add the following lines to /etc/rc.conf.local or /etc/rc.conf
     10 # to enable this service:
     11 #
     12 # syncthing_enable (bool):      Set to NO by default.
     13 #                               Set it to YES to enable syncthing.
     14 # syncthing_home (path):        Directory where syncthing configuration
     15 #                               data is stored.
     16 #                               Default: /usr/local/etc/syncthing
     17 # syncthing_log_file (path):    Syncthing log file
     18 #                               Default: /var/log/syncthing.log
     19 # syncthing_user (user):        Set user to run syncthing.
     20 #                               Default is "syncthing".
     21 # syncthing_group (group):      Set group to run syncthing.
     22 #                               Default is "syncthing".
(...)
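
For example, a non-default configuration directory or log file location could be set with sysrc(8) – the values below are just the defaults repeated for illustration, and the rest of this article sticks to them:

# sysrc syncthing_home="/usr/local/etc/syncthing"
# sysrc syncthing_log_file="/var/log/syncthing.log"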

Syncthing needs the /var/log/syncthing.log log file. Let's create it and set the proper owner and permissions for it.

# ls /var/log/syncthing.log
ls: /var/log/syncthing.log: No such file or directory

# :> /var/log/syncthing.log

# chown syncthing:syncthing /var/log/syncthing.log

# ls -l /var/log/syncthing.log
-rwxr-xr-x  1 syncthing  syncthing  0 2018.08.19 01:06 /var/log/syncthing.log

As we will be using this log file we also need to take care of its rotation; we will use the built-in FreeBSD newsyslog(8) utility for that purpose. In the entry below the J flag compresses rotated logs with bzip2(1) and the C flag creates the log file if it does not exist.

# cat > /etc/newsyslog.conf.d/syncthing << __EOF
# logfilename              [owner:group]     mode  count  size  when  flags [/pid_file]
/var/log/syncthing.log  syncthing:syncthing  640   7      100   *     JC
__EOF

# cat /etc/newsyslog.conf.d/syncthing
# logfilename              [owner:group]     mode  count  size  when  flags [/pid_file]
/var/log/syncthing.log  syncthing:syncthing  640   7      100   *     JC

# newsyslog -v | grep syncthing
Processing /etc/newsyslog.conf.d/syncthing
/var/log/syncthing.log : size (Kb): 0 [100] --> skipping

Let's try to start Syncthing for the first time.

# service syncthing start
Starting syncthing.
daemon: pidfile ``/var/run/syncthing.pid'': Permission denied
/usr/local/etc/rc.d/syncthing: WARNING: failed to start syncthing

It seems that the Syncthing rc(8) startup script does not create the PID file automatically, so let's create it ourselves.

 
# :> /var/run/syncthing.pid

# chown syncthing:syncthing /var/run/syncthing.pid

# ls -l /var/run/syncthing.pid
-rwxr-xr-x  1 syncthing  syncthing  0 2018.08.19 01:08 /var/run/syncthing.pid

Now let's try to start Syncthing again.

# service syncthing start
Starting syncthing.

Better. Let's see which ports it uses.

# sockstat -l -4 | grep syncthing
syncthing syncthing 27499 9  tcp46  *:22000               *:*
syncthing syncthing 27499 10 udp4   *:18876               *:*
syncthing syncthing 27499 13 udp4   *:21027               *:*
syncthing syncthing 27499 20 tcp4   127.0.0.1:8384        *:*
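
These are also the ports that would have to be allowed if the host runs a pf(4) firewall; a minimal /etc/pf.conf sketch, assuming the re0 interface and the 10.0.0.0/24 network from the rc.conf shown earlier, and including the GUI port 8384 once it is moved to the LAN address later in this article:

pass in on re0 proto tcp from 10.0.0.0/24 to (re0) port { 22000 8384 }
pass in on re0 proto udp from 10.0.0.0/24 to (re0) port 21027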

… and check its log file.

# cat /var/log/syncthing.log
[start] 01:08:40 INFO: Generating ECDSA key and certificate for syncthing...
[MPN4S] 01:08:40 INFO: syncthing v0.14.48 "Dysprosium Dragonfly" (go1.10.3 freebsd-amd64) root@111amd64-default-job-12 2018-08-08 09:19:19 UTC [noupgrade]
[MPN4S] 01:08:40 INFO: My ID: MPN4S65-UQWC5SP-3LR2XDB-T5JNYET-VQEQC3X-DSAUI27-BQQKZQE-BWQ3NAO
[MPN4S] 01:08:41 INFO: Single thread SHA256 performance is 131 MB/s using minio/sha256-simd (89 MB/s using crypto/sha256).
[MPN4S] 01:08:41 INFO: Default folder created and/or linked to new config
[MPN4S] 01:08:41 INFO: Default config saved. Edit /usr/local/etc/syncthing/config.xml to taste or use the GUI
[MPN4S] 01:08:42 INFO: Hashing performance is 112.85 MB/s
[MPN4S] 01:08:42 INFO: Updating database schema version from 0 to 2...
[MPN4S] 01:08:42 INFO: Updated symlink type for 0 index entries and added 0 invalid files to global list
[MPN4S] 01:08:42 INFO: Finished updating database schema version from 0 to 2
[MPN4S] 01:08:42 INFO: No stored folder metadata for "default": recalculating
[MPN4S] 01:08:42 WARNING: Creating directory for "Default Folder" (default): mkdir /Sync/: permission denied
[MPN4S] 01:08:42 WARNING: Creating folder marker: folder path missing
[MPN4S] 01:08:42 INFO: Ready to synchronize "Default Folder" (default) (readwrite)
[MPN4S] 01:08:42 INFO: Overall send rate is unlimited, receive rate is unlimited
[MPN4S] 01:08:42 INFO: Rate limits do not apply to LAN connections
[MPN4S] 01:08:42 INFO: Using discovery server https://discovery-v4.syncthing.net/v2/?nolookup&id=LYXKCHX-VI3NYZR-ALCJBHF-WMZYSPK-QG6QJA3-MPFYMSO-U56GTUK-NA2MIAW
[MPN4S] 01:08:42 INFO: Using discovery server https://discovery-v6.syncthing.net/v2/?nolookup&id=LYXKCHX-VI3NYZR-ALCJBHF-WMZYSPK-QG6QJA3-MPFYMSO-U56GTUK-NA2MIAW
[MPN4S] 01:08:42 INFO: Using discovery server https://discovery.syncthing.net/v2/?noannounce&id=LYXKCHX-VI3NYZR-ALCJBHF-WMZYSPK-QG6QJA3-MPFYMSO-U56GTUK-NA2MIAW
[MPN4S] 01:08:42 INFO: TCP listener ([::]:22000) starting
[MPN4S] 01:08:42 INFO: Relay listener (dynamic+https://relays.syncthing.net/endpoint) starting
[MPN4S] 01:08:42 WARNING: Error on folder "Default Folder" (default): folder path missing
[MPN4S] 01:08:42 INFO: Failed initial scan of readwrite folder "Default Folder" (default)
[MPN4S] 01:08:42 INFO: Device MPN4S65-UQWC5SP-3LR2XDB-T5JNYET-VQEQC3X-DSAUI27-BQQKZQE-BWQ3NAO is "blackbox.local" at [dynamic]
[MPN4S] 01:08:42 INFO: Loading HTTPS certificate: open /usr/local/etc/syncthing/https-cert.pem: no such file or directory
[MPN4S] 01:08:42 INFO: Creating new HTTPS certificate
[MPN4S] 01:08:42 INFO: GUI and API listening on 127.0.0.1:8384
[MPN4S] 01:08:42 INFO: Access the GUI via the following URL: http://127.0.0.1:8384/
[MPN4S] 01:08:55 INFO: Joined relay relay://11.12.13.14:443
[MPN4S] 01:09:02 INFO: Detected 1 NAT service

We have several WARNING messages here about the default /Sync directory. Let's fix those.

# service syncthing stop
Stopping syncthing.
Waiting for PIDS: 27498.

Upon the first Syncthing start the rc(8) startup script created the /usr/local/etc/syncthing directory with its configuration.

# find /usr/local/etc/syncthing
/usr/local/etc/syncthing
/usr/local/etc/syncthing/https-cert.pem
/usr/local/etc/syncthing/https-key.pem
/usr/local/etc/syncthing/cert.pem
/usr/local/etc/syncthing/key.pem
/usr/local/etc/syncthing/config.xml
/usr/local/etc/syncthing/index-v0.14.0.db
/usr/local/etc/syncthing/index-v0.14.0.db/MANIFEST-000000
/usr/local/etc/syncthing/index-v0.14.0.db/LOCK
/usr/local/etc/syncthing/index-v0.14.0.db/000001.log
/usr/local/etc/syncthing/index-v0.14.0.db/LOG
/usr/local/etc/syncthing/index-v0.14.0.db/CURRENT

Now let's get back to fixing the WARNING for the /Sync directory.

# grep '/Sync' /usr/local/etc/syncthing/config.xml
    {folder id="default" label="Default Folder" path="//Sync" type="readwrite" rescanIntervalS="3600" fsWatcherEnabled="true" fsWatcherDelayS="10" ignorePerms="false" autoNormalize="true"}

# ls /Sync
ls: /Sync: No such file or directory

Now let's create a dedicated directory for our Syncthing instance and also set it in the /usr/local/etc/syncthing/config.xml config file.

# mkdir /syncthing

# chown syncthing:syncthing /syncthing

# chmod 750 /syncthing

# vi /usr/local/etc/syncthing/config.xml

# grep '/syncthing' /usr/local/etc/syncthing/config.xml
    {folder id="default" label="Default Folder" path="/syncthing" type="readwrite" rescanIntervalS="3600" fsWatcherEnabled="true" fsWatcherDelayS="10" ignorePerms="false" autoNormalize="true"}

We will also disable the Relay and the Global Announce server but we will leave the Local Announce server enabled.

# grep -i relay /usr/local/etc/syncthing/config.xml
        {relaysEnabled}true{/relaysEnabled}
        {relayReconnectIntervalM}10{/relayReconnectIntervalM}

# vi /usr/local/etc/syncthing/config.xml

# grep -i relay /usr/local/etc/syncthing/config.xml
        {relaysEnabled}false{/relaysEnabled}
        {relayReconnectIntervalM}10{/relayReconnectIntervalM}

# grep globalAnnounce /usr/local/etc/syncthing/config.xml
        {globalAnnounceServer}default{/globalAnnounceServer}
        {globalAnnounceEnabled}true{/globalAnnounceEnabled}

# vi /usr/local/etc/syncthing/config.xml

# grep globalAnnounce /usr/local/etc/syncthing/config.xml
        {globalAnnounceServer}default{/globalAnnounceServer}
        {globalAnnounceEnabled}false{/globalAnnounceEnabled}
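
If you prefer a non-interactive edit over vi(1), the same two switches can be flipped with sed(1) – shown with real angle brackets here, as the actual config.xml is plain XML:

# sed -i '' \
> -e 's|<relaysEnabled>true|<relaysEnabled>false|' \
> -e 's|<globalAnnounceEnabled>true|<globalAnnounceEnabled>false|' \
> /usr/local/etc/syncthing/config.xml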

Before restarting Syncthing, let's truncate the /var/log/syncthing.log file to eliminate the now unneeded information.

# service syncthing stop
Stopping syncthing.

# :> /var/log/syncthing.log

# service syncthing start
Starting syncthing.

Let's check what the log holds for us now.

# cat /var/log/syncthing.log
[MPN4S] 01:13:38 INFO: syncthing v0.14.48 "Dysprosium Dragonfly" (go1.10.3 freebsd-amd64) root@111amd64-default-job-12 2018-08-08 09:19:19 UTC [noupgrade]
[MPN4S] 01:13:38 INFO: My ID: MPN4S65-UQWC5SP-3LR2XDB-T5JNYET-VQEQC3X-DSAUI27-BQQKZQE-BWQ3NAO
[MPN4S] 01:13:39 INFO: Single thread SHA256 performance is 131 MB/s using minio/sha256-simd (89 MB/s using crypto/sha256).
[MPN4S] 01:13:40 INFO: Hashing performance is 112.97 MB/s
[MPN4S] 01:13:40 INFO: Ready to synchronize "Default Folder" (default) (readwrite)
[MPN4S] 01:13:40 INFO: Overall send rate is unlimited, receive rate is unlimited
[MPN4S] 01:13:40 INFO: Rate limits do not apply to LAN connections
[MPN4S] 01:13:40 INFO: Device MPN4S65-UQWC5SP-3LR2XDB-T5JNYET-VQEQC3X-DSAUI27-BQQKZQE-BWQ3NAO is "blackbox.local" at [dynamic]
[MPN4S] 01:13:40 INFO: TCP listener ([::]:22000) starting
[MPN4S] 01:13:40 INFO: Completed initial scan of readwrite folder "Default Folder" (default)
[MPN4S] 01:13:40 INFO: GUI and API listening on 127.0.0.1:8384
[MPN4S] 01:13:40 INFO: Access the GUI via the following URL: http://127.0.0.1:8384/

We can see that the management interface listens on HTTP, not HTTPS, because the tls option is set to false. We will enable it, and we will also switch the management interface address from localhost (127.0.0.1) to our IP address (10.0.0.100).

# grep -B 1 -A 3 127.0.0.1 /usr/local/etc/syncthing/config.xml
    {gui enabled="true" tls="false" debugging="false"}
        {address}127.0.0.1:8384{/address}
        {apikey}2jU5aR4zTJLGdEuSLLmdRGgfCgJaUpUv{/apikey}
        {theme}default{/theme}
    {/gui}

# vi /usr/local/etc/syncthing/config.xml

# grep -B 1 -A 3 10.0.0.100 /usr/local/etc/syncthing/config.xml
    {gui enabled="true" tls="true" debugging="false"}
        {address}10.0.0.100:8384{/address}
        {apikey}2jU5aR4zTJLGdEuSLLmdRGgfCgJaUpUv{/apikey}
        {theme}default{/theme}
    {/gui}

Let's verify our changes now.

# service syncthing stop
Stopping syncthing.

# :> /var/log/syncthing.log

# service syncthing start
Starting syncthing.

# cat /var/log/syncthing.log
[MPN4S] 01:16:20 INFO: syncthing v0.14.48 "Dysprosium Dragonfly" (go1.10.3 freebsd-amd64) root@111amd64-default-job-12 2018-08-08 09:19:19 UTC [noupgrade]
[MPN4S] 01:16:20 INFO: My ID: MPN4S65-UQWC5SP-3LR2XDB-T5JNYET-VQEQC3X-DSAUI27-BQQKZQE-BWQ3NAO
[MPN4S] 01:16:21 INFO: Single thread SHA256 performance is 131 MB/s using minio/sha256-simd (89 MB/s using crypto/sha256).
[MPN4S] 01:16:22 INFO: Hashing performance is 113.07 MB/s
[MPN4S] 01:16:22 INFO: Ready to synchronize "Default Folder" (default) (readwrite)
[MPN4S] 01:16:22 INFO: Overall send rate is unlimited, receive rate is unlimited
[MPN4S] 01:16:22 INFO: Rate limits do not apply to LAN connections
[MPN4S] 01:16:22 INFO: TCP listener ([::]:22000) starting
[MPN4S] 01:16:22 INFO: Completed initial scan of readwrite folder "Default Folder" (default)
[MPN4S] 01:16:22 INFO: Device MPN4S65-UQWC5SP-3LR2XDB-T5JNYET-VQEQC3X-DSAUI27-BQQKZQE-BWQ3NAO is "blackbox.local" at [dynamic]
[MPN4S] 01:16:22 INFO: GUI and API listening on 10.0.0.100:8384
[MPN4S] 01:16:22 INFO: Access the GUI via the following URL: https://10.0.0.100:8384/
[MPN4S] 01:16:42 INFO: Detected 1 NAT service

The log is now ‘clean’ and we can continue in the browser at the https://10.0.0.100:8384 management interface for the rest of the Syncthing configuration. The browser will of course warn us about the untrusted self-signed HTTPS certificate.
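
If you want to verify that it really is your instance's certificate before accepting the warning, you can compare the SHA-256 fingerprints of the certificate on disk and the one presented over the network with openssl(1):

# openssl x509 -noout -fingerprint -sha256 -in /usr/local/etc/syncthing/https-cert.pem
# echo | openssl s_client -connect 10.0.0.100:8384 2> /dev/null | openssl x509 -noout -fingerprint -sha256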

syncthing-01.png

Syncthing will ask us whether we agree to share anonymous usage statistics. I leave that choice to you.

syncthing-02.png

The Syncthing dashboard welcomes us with a big red warning about remote administration being allowed without a password. We will fix that in a moment; click the Settings button in that warning.

syncthing-03

Leave the first General tab unmodified.

syncthing-04.png

On the GUI tab we will create the admin user with the SYNCTHINGPASSWORD password for the Syncthing management interface. Use something more sensible here 🙂

syncthing-05.png

I did not modify any settings on the Connections tab. Click Save to continue.

syncthing-06.png

Besides setting the user and its password I haven’t changed/set any other options.

We now have Syncthing running without errors. You will be prompted for that user and password in a moment. We will now remove the Default Folder as it's not needed. Hit its Edit button.

syncthing-07.png

Then click the Remove button on the bottom.

syncthing-08.png

… and click Yes for confirmation.

syncthing-09.png

The ’empty’ Syncthing dashboard.

syncthing-10.png

Next we will download, install and configure Syncthing on the Android phone. Depending on your preferences use the F-Droid repository or the Google Play store … or just an APK file from the source of your choice. The installed Syncthing application is shown below. It takes about 50 MB.

syncthing-11

Let's start it then; you will see the Welcome message from the Syncthing application.

syncthing-12

Depending on your Android version your phone may ask you to grant Syncthing various permissions. Agree.

syncthing-13

Same as earlier, Syncthing will ask whether you agree to share usage statistics. I also leave that choice to you.

syncthing-14

Syncthing will now require a restart; tap RESTART NOW to continue.

syncthing-15

By default the Camera directory is preconfigured, pointing at the /storage/emulated/0/DCIM directory which holds photos and screenshots taken on the phone. It's enough for me so I will use it. Tap the Syncthing hamburger menu button.

syncthing-19

… and select Web GUI option.

syncthing-20

You will see the management interface for Syncthing on your Android phone; scroll down to add the blackbox.local Syncthing instance from FreeBSD in the Remote Devices section.

syncthing-21

Now in the Remote Devices section hit the Add Remote Device button.

syncthing-22

Remember the Local Announce service we left enabled? This is where it comes in handy. The ID of our FreeBSD Syncthing instance will be displayed, as it was automatically detected on the network.

syncthing-23

Click on the displayed ID and enter the blackbox.local hostname.

Besides entering (clicking) the ID and the hostname I did not set any other options. Click Save.

syncthing-24

The blackbox.local will be added to the Remote Devices list.

syncthing-25

Below are the Camera directory properties. Remember to select blackbox.local as the allowed host (small yellow slider).

syncthing-26

… and the blackbox.local device properties.

syncthing-27

Now let's get back to the FreeBSD Syncthing instance management interface in the browser. You will be prompted to add the Android phone's Syncthing – SM-A320FL in my case – to the devices. Hit the green Add Device button.

syncthing-28.png

Click Save without adding other options.

syncthing-29.png

The SM-A320FL device for our Android phone is now visible in the Remote Devices section.

syncthing-30.png

You should now be prompted that the SM-A320FL device wants to share the Camera directory. Hit the green Add button.

syncthing-31.png

Enter SM-A320FL as the folder label and /syncthing/SM-A320FL as the directory name on the FreeBSD Syncthing instance. Also make sure that SM-A320FL is selected in the Share With Devices section at the bottom.

syncthing-32.png

The SM-A320FL device and the SM-A320FL folder from this device are now configured. You will first see an Out of Sync message for the SM-A320FL folder. The synchronization should now start; its progress can be observed both on the phone and in the management interface of the FreeBSD Syncthing instance in the browser.

syncthing-33.png

The SM-A320FL folder switched its status to Syncing and shows the progress.

syncthing-34.png

You will see similar status on the Android phone.

syncthing-36

After some time you will see that the SM-A320FL folder has the Up to Date status. That means that all files from the Camera directory are synchronized to the FreeBSD Syncthing instance.

syncthing-35

The created/synced directories from the Android phone look as follows on the FreeBSD Syncthing instance.

# find /syncthing -type d
/syncthing
/syncthing/SM-A320FL
/syncthing/SM-A320FL/Camera
/syncthing/SM-A320FL/Camera/.AutoPortrait
/syncthing/SM-A320FL/Screenshots
/syncthing/SM-A320FL/.thumbnails
/syncthing/SM-A320FL/.stfolder

Now you have your Camera files synced as a backup.

The complete Syncthing config /usr/local/etc/syncthing/config.xml from the FreeBSD instance is available here. After downloading, rename it from *.xml.key to *.xml (WordPress limitation).

UPDATE 1

The Syncthing on FreeBSD article was featured in the BSD Now 262 – OpenBSD Surfacing episode.

Thanks for the mention!

EOF