Tag Archives: pkg

Nextcloud 17 on FreeBSD 12.1

Not so long ago – almost two years – I wrote about setting up Nextcloud 13 on FreeBSD.

Today Nextcloud is at version 17 and the configuration that worked two years ago requires some tweaks.

nextcloud-logo.png

This guide will not cover the information already available in the earlier Nextcloud 13 on FreeBSD article, such as the settings needed to run Nextcloud inside a FreeBSD Jail. Please refer to that earlier article for these settings.

Today we will use the following backends for Nextcloud 17:

  • PostgreSQL 12
  • PHP 7.3
  • Nginx 1.14 (with php-fpm)
  • Memcached 1.5.19

As the Nextcloud FreeBSD package comes with MySQL support but without PostgreSQL support, we will need to build it from source using FreeBSD Ports.

Settings

Let’s fetch the latest FreeBSD Ports tree.

# rm -r /var/db/portsnap
# mkdir /var/db/portsnap
# portsnap auto

Now we need to configure needed options in the /etc/make.conf file.

# cat /etc/make.conf
WRKDIRPREFIX=${PORTSDIR}/obj
DEFAULT_VERSIONS+= php=7.3
DEFAULT_VERSIONS+= pgsql=12
OPTIONS_UNSET+=    MYSQL
OPTIONS_SET+=      PGSQL
OPTIONS_SET+=      IMAGICK
OPTIONS_SET+=      PCNTL
OPTIONS_SET+=      SMB
OPTIONS_SET+=      REDIS


Packages and Ports

First we will add some basic tools and things like PostgreSQL, still using FreeBSD packages, to save some time instead of compiling them.

# pkg install \
    sudo \
    portmaster \
    beadm \
    lsblk \
    postgresql12-client \
    postgresql12-server \
    nginx \
    memcached \
    php73-pecl-memcached


Now we will compile Nextcloud and its dependencies using FreeBSD Ports – but with portmaster.

# env BATCH=yes portmaster \
    databases/php73-pdo_pgsql \
    databases/php73-pgsql \
    www/nextcloud 

PostgreSQL

We will now configure the FreeBSD’s Login Class for PostgreSQL database in the /etc/login.conf file.

# cat << EOF >> /etc/login.conf

postgres:\
        :lang=en_US.UTF-8:\
        :setenv=LC_COLLATE=C:\
        :tc=default:

EOF

# cap_mkdb /etc/login.conf

… and the PostgreSQL settings in FreeBSD’s main configuration /etc/rc.conf file.

# grep postgresql /etc/rc.conf
postgresql_enable=YES
postgresql_class=postgres
postgresql_data=/var/db/postgres/data12

Let’s initialize the PostgreSQL database.

# /usr/local/etc/rc.d/postgresql initdb
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locales
  COLLATE:  C
  CTYPE:    en_US.UTF-8
  MESSAGES: en_US.UTF-8
  MONETARY: en_US.UTF-8
  NUMERIC:  en_US.UTF-8
  TIME:     en_US.UTF-8
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /var/db/postgres/data12 ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default time zone ... Europe/Warsaw
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

initdb: warning: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    /usr/local/bin/pg_ctl -D /var/db/postgres/data12 -l logfile start


As the PostgreSQL database uses 8 KB blocks, let’s set the matching record size in ZFS. We could of course create a dedicated dataset for this purpose if needed.

# zfs set recordsize=8k zroot/ROOT/default
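
If you prefer a dedicated dataset instead of changing the record size of the root dataset, a minimal sketch could look like the command below – the dataset name and mountpoint are just examples, adjust them to your pool layout.

# zfs create -o recordsize=8k -o mountpoint=/var/db/postgres zroot/pgsql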

Now, let’s start the PostgreSQL database.

# /usr/local/etc/rc.d/postgresql start
2019-12-31 11:47:04.918 CET [36089] LOG:  starting PostgreSQL 12.1 on amd64-portbld-freebsd12.0, compiled by FreeBSD clang version 6.0.1 (tags/RELEASE_601/final 335540) (based on LLVM 6.0.1), 64-bit
2019-12-31 11:47:04.918 CET [36089] LOG:  listening on IPv6 address "::1", port 5432
2019-12-31 11:47:04.918 CET [36089] LOG:  listening on IPv4 address "127.0.0.1", port 5432
2019-12-31 11:47:04.919 CET [36089] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5432"
2019-12-31 11:47:04.928 CET [36089] LOG:  ending log output to stderr
2019-12-31 11:47:04.928 CET [36089] HINT:  Future log output will go to log destination "syslog".

We will now create PostgreSQL database for our Nextcloud instance.

# psql -hlocalhost -Upostgres
psql (12.1)
Type "help" for help.

postgres=# CREATE USER nextcloud WITH PASSWORD 'NEXTCLOUD_DB_PASSWORD';
CREATE ROLE
postgres=# CREATE DATABASE nextcloud TEMPLATE template0 ENCODING 'UNICODE';
CREATE DATABASE
postgres=# ALTER DATABASE nextcloud OWNER TO nextcloud;
ALTER DATABASE
postgres=# \q

Keep in mind to put something more sophisticated in place of the NEXTCLOUD_DB_PASSWORD string.
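
For example you can let openssl(1) from the base system generate a random password for you.

# openssl rand -base64 24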

PostgreSQL Cleanup and Indexing Script

Let’s automate some PostgreSQL housekeeping.

# mkdir -p /var/db/postgres/bin
# chown postgres /var/db/postgres/bin
# vi /var/db/postgres/bin/vacuum.sh

#! /bin/sh

/usr/local/bin/vacuumdb -az 1> /dev/null 2> /dev/null
/usr/local/bin/reindexdb -a 1> /dev/null 2> /dev/null
/usr/local/bin/reindexdb -s 1> /dev/null 2> /dev/null
:wq

# cat /var/db/postgres/bin/vacuum.sh
#! /bin/sh

/usr/local/bin/vacuumdb -az 1> /dev/null 2> /dev/null
/usr/local/bin/reindexdb -a 1> /dev/null 2> /dev/null
/usr/local/bin/reindexdb -s 1> /dev/null 2> /dev/null

# chown postgres /var/db/postgres/bin/vacuum.sh
# chmod +x /var/db/postgres/bin/vacuum.sh

# su - postgres -c 'crontab -e'
0 0 * * * /var/db/postgres/bin/vacuum.sh
:wq
/tmp/crontab.JMg5BfT5HV: 2 lines, 42 characters.
crontab: installing new crontab

# su - postgres -c 'crontab -l'
0 0 * * * /var/db/postgres/bin/vacuum.sh

# su - postgres -c '/var/db/postgres/bin/vacuum.sh'

Nginx

Now it’s time for the Nginx web server.

# chown -R www:www /var/log/nginx

# ls -l /var/log/nginx
total 3
-rw-r-----  1 www  www   64 2019.12.31 00:00 access.log
-rw-r-----  1 www  www  133 2019.12.31 00:00 access.log.0.bz2
-rw-r-----  1 www  www   64 2019.12.31 00:00 error.log
-rw-r-----  1 www  www  133 2019.12.31 00:00 error.log.0.bz2

… and its main nginx.conf configuration file.

# cat /usr/local/etc/nginx/nginx.conf
user www;
worker_processes 4;
worker_rlimit_nofile 51200;
error_log /var/log/nginx/error.log;

events {
  worker_connections 1024;
}

http {
  include mime.types;
  default_type application/octet-stream;
  log_format main '$remote_addr - $remote_user [$time_local] "$request" ';
  access_log /var/log/nginx/access.log main;
  sendfile on;
  keepalive_timeout 65;

  upstream php-handler {
    server 127.0.0.1:9000;
  }

  server {
    # ENFORCE HTTPS
    listen 80;
    server_name nextcloud.domain.com;
    return 301 https://$server_name$request_uri;
  }

  server {
    listen 443 ssl http2;
    server_name nextcloud.domain.com;
    ssl_certificate /usr/local/etc/nginx/ssl/ssl-bundle.crt;
    ssl_certificate_key /usr/local/etc/nginx/ssl/server.key;

    # HEADERS SECURITY RELATED
    add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;";
    add_header Referrer-Policy "no-referrer";

    # HEADERS
    add_header X-Content-Type-Options nosniff;
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Robots-Tag none;
    add_header X-Download-Options noopen;
    add_header X-Permitted-Cross-Domain-Policies none;

    # PATH TO THE ROOT OF YOUR INSTALLATION
    root /usr/local/www/nextcloud/;

    location = /robots.txt {
      allow all;
      log_not_found off;
      access_log off;
    }

    location = /.well-known/carddav {
      return 301 $scheme://$host/remote.php/dav;
    }

    location = /.well-known/caldav {
      return 301 $scheme://$host/remote.php/dav;
    }

    # BUFFERS TIMEOUTS UPLOAD SIZES
    client_max_body_size 16400M;
    client_body_buffer_size 1048576k;
    send_timeout 3000;

    # ENABLE GZIP BUT DO NOT REMOVE ETag HEADERS
    gzip on;
    gzip_vary on;
    gzip_comp_level 4;
    gzip_min_length 256;
    gzip_proxied expired no-cache no-store private no_last_modified no_etag auth;
    gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy;

    location / {
      rewrite ^ /index.php$request_uri;
    }

    location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)/ {
      deny all;
    }

    location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console) {
      deny all;
    }

    location ~ ^\/(?:index|remote|public|cron|core\/ajax\/update|status|ocs\/v[12]|updater\/.+|oc[ms]-provider\/.+)\.php(?:$|\/) {
      fastcgi_split_path_info ^(.+\.php)(/.*)$;
      include fastcgi_params;
      fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
      fastcgi_param PATH_INFO $fastcgi_path_info;
      fastcgi_param HTTPS on;
      fastcgi_param modHeadersAvailable true;
      fastcgi_param front_controller_active true;
      fastcgi_pass php-handler;
      fastcgi_intercept_errors on;
      fastcgi_request_buffering off;
      fastcgi_keep_conn off;
      fastcgi_buffers 16 256K;
      fastcgi_buffer_size 256k;
      fastcgi_busy_buffers_size 256k;
      fastcgi_temp_file_write_size 256k;
      fastcgi_send_timeout 3000s;
      fastcgi_read_timeout 3000s;
      fastcgi_connect_timeout 3000s;
    }

    location ~ ^\/(?:updater|oc[ms]-provider)(?:$|\/) {
      try_files $uri/ =404;
      index index.php;
    }

    # ADDING THE CACHE CONTROL HEADER FOR JS AND CSS FILES
    # MAKE SURE IT IS BELOW PHP BLOCK
    location ~ \.(?:css|js|woff2?|svg|gif)$ {
      try_files $uri /index.php$uri$is_args$args;
      add_header Cache-Control "public, max-age=15778463";
      # HEADERS SECURITY RELATED
      # IT IS INTENDED TO HAVE THOSE DUPLICATED TO ONES ABOVE
      add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;";
      # HEADERS
      add_header X-Content-Type-Options nosniff;
      add_header X-XSS-Protection "1; mode=block";
      add_header X-Robots-Tag none;
      add_header X-Download-Options noopen;
      add_header X-Permitted-Cross-Domain-Policies none;
      # OPTIONAL: DONT LOG ACCESS TO ASSETS
      access_log off;
    }

    location ~ \.(?:png|html|ttf|ico|jpg|jpeg)$ {
      try_files $uri /index.php$uri$is_args$args;
      # OPTIONAL: DONT LOG ACCESS TO OTHER ASSETS
      access_log off;
    }
  }
}

OpenSSL HTTPS Certificates

We will now generate the certificates needed for Nextcloud’s HTTPS service.

# mkdir -p /usr/local/etc/nginx/ssl

# cd /usr/local/etc/nginx/ssl

# openssl genrsa -des3 -out server.key 2048
Generating RSA private key, 2048 bit long modulus (2 primes)
............+++++
....+++++
e is 65537 (0x010001)
Enter pass phrase for server.key: SERVER_KEY_PASSWORD
Verifying - Enter pass phrase for server.key: SERVER_KEY_PASSWORD

As usual, use something more sensible than the SERVER_KEY_PASSWORD string here 🙂

# openssl req -new -key server.key -out server.csr
Enter pass phrase for server.key:
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:PL
State or Province Name (full name) [Some-State]:lodzkie
Locality Name (eg, city) []:Lodz
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Vermaden Enterprises Ltd.
Organizational Unit Name (eg, section) []:IT Department
Common Name (e.g. server FQDN or YOUR name) []:nextcloud.domain.com
Email Address []:vermaden@interia.pl

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:


# cp server.key server.key.orig

# openssl rsa -in server.key.orig -out server.key
Enter pass phrase for server.key.orig: SERVER_KEY_PASSWORD
writing RSA key

# ls -l /usr/local/etc/nginx/ssl
total 7
-rw-r--r--  1 root  wheel  1151 2019.12.31 12:39 server.csr
-rw-------  1 root  wheel  1679 2019.12.31 12:41 server.key
-rw-------  1 root  wheel  1751 2019.12.31 12:40 server.key.orig

# openssl x509 -req -days 7000 -in server.csr -signkey server.key -out server.crt
Signature ok
subject=C = PL, ST = lodzkie, L = Lodz, O = Vermaden Enterprises Ltd., OU = IT Department, CN = nextcloud.domain.com, emailAddress = vermaden@interia.pl
Getting Private key

# ln -s /usr/local/etc/nginx/ssl/server.crt /usr/local/etc/nginx/ssl/ssl-bundle.crt
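
As a quick sanity check you can print the subject and validity dates of the freshly generated certificate.

# openssl x509 -in server.crt -noout -subject -dates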

PHP

Here is the PHP configuration used, allowing files of up to 16 GB for Nextcloud.

# grep '^[^;]' /usr/local/etc/php.ini
[PHP]
max_input_time=3600
engine = On
short_open_tag = On
precision = 14
output_buffering = OFF
zlib.output_compression = Off
implicit_flush = Off
unserialize_callback_func =
serialize_precision = 17
disable_functions =
disable_classes =
zend.enable_gc = On
expose_php = On
max_execution_time = 3600
max_input_time = 30000
memory_limit = 1024M
error_reporting = E_ALL & ~E_DEPRECATED & ~E_STRICT
display_errors = Off
display_startup_errors = Off
log_errors = On
log_errors_max_len = 1024
ignore_repeated_errors = Off
ignore_repeated_source = Off
report_memleaks = On
track_errors = Off
html_errors = On
error_log = /var/log/php.log
variables_order = "GPCS"
request_order = "GP"
register_argc_argv = Off
auto_globals_jit = On
post_max_size = 16400M
auto_prepend_file =
auto_append_file =
default_mimetype = "text/html"
default_charset = "UTF-8"
doc_root =
user_dir =
enable_dl = Off
file_uploads = On
upload_max_filesize = 16400M
max_file_uploads = 64
allow_url_fopen = On
allow_url_include = Off
default_socket_timeout = 300
[CLI Server]
cli_server.color = On
[Date]
date.timezone = Europe/Warsaw
[filter]
[iconv]
[intl]
[sqlite3]
[Pcre]
[Pdo]
[Pdo_mysql]
pdo_mysql.cache_size = 2000
pdo_mysql.default_socket=
[Phar]
[mail function]
SMTP = localhost
smtp_port = 25
mail.add_x_header = On
[SQL]
sql.safe_mode = Off
[ODBC]
odbc.allow_persistent = On
odbc.check_persistent = On
odbc.max_persistent = -1
odbc.max_links = -1
odbc.defaultlrl = 4096
odbc.defaultbinmode = 1
[Interbase]
ibase.allow_persistent = 1
ibase.max_persistent = -1
ibase.max_links = -1
ibase.timestampformat = "%Y-%m-%d %H:%M:%S"
ibase.dateformat = "%Y-%m-%d"
ibase.timeformat = "%H:%M:%S"
[MySQLi]
mysqli.max_persistent = -1
mysqli.allow_persistent = On
mysqli.max_links = -1
mysqli.cache_size = 2000
mysqli.default_port = 3306
mysqli.default_socket =
mysqli.default_host =
mysqli.default_user =
mysqli.default_pw =
mysqli.reconnect = Off
[mysqlnd]
mysqlnd.collect_statistics = On
mysqlnd.collect_memory_statistics = Off
[OCI8]
[PostgreSQL]
pgsql.allow_persistent = On
pgsql.auto_reset_persistent = Off
pgsql.max_persistent = -1
pgsql.max_links = -1
pgsql.ignore_notice = 0
pgsql.log_notice = 0
[bcmath]
bcmath.scale = 0
[browscap]
[Session]
session.save_handler = files
session.save_path = "/tmp"
session.use_strict_mode = 0
session.use_cookies = 1
session.use_only_cookies = 1
session.name = PHPSESSID
session.auto_start = 0
session.cookie_lifetime = 0
session.cookie_path = /
session.cookie_domain =
session.cookie_httponly =
session.serialize_handler = php
session.gc_probability = 1
session.gc_divisor = 1000
session.gc_maxlifetime = 1440
session.referer_check =
session.cache_limiter = nocache
session.cache_expire = 180
session.use_trans_sid = 0
session.hash_function = 0
session.hash_bits_per_character = 5
url_rewriter.tags = "a=href,area=href,frame=src,input=src,form=fakeentry"
[Assertion]
zend.assertions = -1
[COM]
[mbstring]
[gd]
[exif]
[Tidy]
tidy.clean_output = Off
[soap]
soap.wsdl_cache_enabled=1
soap.wsdl_cache_dir="/tmp"
soap.wsdl_cache_ttl=86400
soap.wsdl_cache_limit = 5
[sysvshm]
[ldap]
ldap.max_links = -1
[mcrypt]
[dba]
[opcache]
opcache.enable=1
opcache.enable_cli=1
opcache.interned_strings_buffer=8
opcache.max_accelerated_files=10000
opcache.memory_consumption=128
opcache.save_comments=1
opcache.revalidate_freq=1
[curl]
[openssl] 

PHP PostgreSQL Database Settings

The settings below are needed to make PHP work with the PostgreSQL database.

# cat /usr/local/etc/php/ext-20-pgsql.ini
extension=pgsql.so

# cat << EOF >> /usr/local/etc/php/ext-20-pgsql.ini

[PostgreSQL]
pgsql.allow_persistent = On
pgsql.auto_reset_persistent = Off
pgsql.max_persistent = -1
pgsql.max_links = -1
pgsql.ignore_notice = 0
pgsql.log_notice = 0
EOF

# cat /usr/local/etc/php/ext-20-pgsql.ini
extension=pgsql.so

[PostgreSQL]
pgsql.allow_persistent = On
pgsql.auto_reset_persistent = Off
pgsql.max_persistent = -1
pgsql.max_links = -1
pgsql.ignore_notice = 0
pgsql.log_notice = 0


… and the second one.

# cat /usr/local/etc/php/ext-30-pdo_pgsql.ini
extension=pdo_pgsql.so

# cat << EOF >> /usr/local/etc/php/ext-30-pdo_pgsql.ini

[PostgreSQL]
pgsql.allow_persistent = On
pgsql.auto_reset_persistent = Off
pgsql.max_persistent = -1
pgsql.max_links = -1
pgsql.ignore_notice = 0
pgsql.log_notice = 0
EOF

# cat /usr/local/etc/php/ext-30-pdo_pgsql.ini
extension=pdo_pgsql.so

[PostgreSQL]
pgsql.allow_persistent = On
pgsql.auto_reset_persistent = Off
pgsql.max_persistent = -1
pgsql.max_links = -1
pgsql.ignore_notice = 0
pgsql.log_notice = 0

PHP FPM

Now the PHP FPM daemon.

# grep '^[^;]' /usr/local/etc/php-fpm.conf
[global]
pid = run/php-fpm.pid
error_log = log/php-fpm.log
syslog.facility = daemon
include=/usr/local/etc/php-fpm.d/*.conf

# touch /var/log/php-fpm.log

# chown www:www /var/log/php-fpm.log

# grep '^[^;]' /usr/local/etc/php-fpm.d/www.conf
[www]
user = www
group = www
listen = 127.0.0.1:9000
listen.backlog = -1
listen.owner = www
listen.group = www
listen.mode = 0660
listen.allowed_clients = 127.0.0.1
pm = static
pm.max_children = 8
pm.start_servers = 4
pm.min_spare_servers = 4
pm.max_spare_servers = 32
pm.process_idle_timeout = 1000s;
pm.max_requests = 500
request_terminate_timeout = 0
rlimit_files = 51200
env[HOSTNAME] = $HOSTNAME
env[PATH] = /usr/local/bin:/usr/bin:/bin
env[TMP] = /tmp
env[TMPDIR] = /tmp
env[TEMP] = /tmp

Start Backend Services

We will now start all ‘backend’ services needed for Nextcloud.

# service postgresql start
2020-01-02 13:18:05.970 CET [52233] LOG:  starting PostgreSQL 12.1 on amd64-portbld-freebsd12.0, compiled by FreeBSD clang version 6.0.1 (tags/RELEASE_601/final 335540) (based on LLVM 6.0.1), 64-bit
2020-01-02 13:18:05.974 CET [52233] LOG:  listening on IPv6 address "::1", port 5432
2020-01-02 13:18:05.974 CET [52233] LOG:  listening on IPv4 address "127.0.0.1", port 5432
2020-01-02 13:18:05.975 CET [52233] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5432"
2020-01-02 13:18:06.024 CET [52233] LOG:  ending log output to stderr
2020-01-02 13:18:06.024 CET [52233] HINT:  Future log output will go to log destination "syslog".

# service postgresql status
pg_ctl: server is running (PID: 36089)
/usr/local/bin/postgres "-D" "/var/db/postgres/data12"

# service php-fpm start
Performing sanity check on php-fpm configuration:
[02-Jan-2020 13:16:50] NOTICE: configuration file /usr/local/etc/php-fpm.conf test is successful

Starting php_fpm.

# service php-fpm status
php_fpm is running as pid 52193.

# service memcached start
Starting memcached.

# service memcached status
memcached is running as pid 52273.

# service nginx start
Performing sanity check on nginx configuration:
nginx: the configuration file /usr/local/etc/nginx/nginx.conf syntax is ok
nginx: configuration file /usr/local/etc/nginx/nginx.conf test is successful
Starting nginx.

Nextcloud Configuration

I created a link named /data pointing to the Nextcloud data directory located at /usr/local/www/nextcloud/data.

# ln -s /usr/local/www/nextcloud/data /data

Then we use Firefox or another web browser to finish the Nextcloud configuration.

Type https://1.2.3.4 in the browser, where 1.2.3.4 is your Nextcloud instance IP address.

I am sorry but the following image is in the Polish language – I forgot to change it to English … but I assume you will know what to put in these fields from context.

nextcloud-setup.png
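
If you prefer the command line over the browser wizard, the same initial setup can also be done with the occ tool. Below is a sketch with illustrative values – adjust the database password, admin credentials and data directory to your setup.

# sudo -u www /usr/local/bin/php /usr/local/www/nextcloud/occ maintenance:install \
    --database 'pgsql' \
    --database-host 'localhost' \
    --database-name 'nextcloud' \
    --database-user 'nextcloud' \
    --database-pass 'NEXTCLOUD_DB_PASSWORD' \
    --admin-user 'admin' \
    --admin-pass 'ADMIN_PASSWORD' \
    --data-dir '/usr/local/www/nextcloud/data'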

After we finish the setup we go straight to the Nextcloud Overview page at https://1.2.3.4/settings/admin/serverinfo to see what else needs to be taken care of.

nextcloud-setup-overview.png

Two issues need to be addressed: one is about the Nginx configuration, the other about PostgreSQL. Let’s fix them.

We will add the needed header to the Nginx configuration file.

# diff -u /usr/local/etc/nginx/nginx.conf.OLD /usr/local/etc/nginx/nginx.conf
--- /usr/local/etc/nginx/nginx.conf.OLD  2020-01-02 14:21:58.359398000 +0100
+++ /usr/local/etc/nginx/nginx.conf      2020-01-02 14:21:42.823426000 +0100
@@ -46,6 +46,7 @@
     add_header X-Robots-Tag none;
     add_header X-Download-Options noopen;
     add_header X-Permitted-Cross-Domain-Policies none;
+    add_header X-Frame-Options "SAMEORIGIN";

     # PATH TO THE ROOT OF YOUR INSTALLATION
     root /usr/local/www/nextcloud/;

# service nginx reload
Performing sanity check on nginx configuration:
nginx: the configuration file /usr/local/etc/nginx/nginx.conf syntax is ok
nginx: configuration file /usr/local/etc/nginx/nginx.conf test is successful

… and run the needed PostgreSQL conversion.

# sudo -u www /usr/local/bin/php /usr/local/www/nextcloud/occ db:convert-filecache-bigint
Following columns will be updated:

* mounts.storage_id
* mounts.root_id
* mounts.mount_id

This can take up to hours, depending on the number of files in your instance!
Continue with the conversion (y/n)? [n] y

Voila! Both of our problems are gone now.

nextcloud-setup-overview-fixed.png
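
We installed memcached as one of the backends but Nextcloud will not use it until it is configured to do so. Here is a sketch of the relevant config.php entries, assuming memcached listens on the default localhost:11211.

  'memcache.local' => '\OC\Memcache\Memcached',
  'memcache.distributed' => '\OC\Memcache\Memcached',
  'memcached_servers' => array(
    array('localhost', 11211),
  ),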

Trusted Domains

When you access Nextcloud using a different domain you will get a warning about it.

To add a new Trusted Domain to the Nextcloud config do the following.

Here is how it looks before the changes.

# grep -A 3 trusted /usr/local/www/nextcloud/config/config.php
  'trusted_domains' =>
  array (
    0 => '1.2.3.4',
  ),

We will now add nextcloud.domain.com domain.

# vi /usr/local/www/nextcloud/config/config.php

# grep -A 4 trusted /usr/local/www/nextcloud/config/config.php
  'trusted_domains' =>
  array (
    0 => '1.2.3.4',
    1 => 'nextcloud.domain.com',
  ),

You can of course add more domains with successive numbers.

# grep -A 5 trusted /usr/local/www/nextcloud/config/config.php
  'trusted_domains' =>
  array (
    0 => '1.2.3.4',
    1 => 'nextcloud.domain.com',
    2 => 'cloud.domain.com',
  ),
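
Instead of editing the config.php file by hand you can also manage Trusted Domains with the occ tool. A sketch below that would have added the cloud.domain.com entry shown above.

# sudo -u www /usr/local/bin/php /usr/local/www/nextcloud/occ \
    config:system:set trusted_domains 2 --value=cloud.domain.com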

This is the end of this guide. Feel free to share your thoughts 🙂

Log Rotation with Newsyslog

Newsyslog is part of FreeBSD’s base system. We will add the Nextcloud and backend daemon log files to the Newsyslog configuration so they will be rotated.

 
# cat << EOF >> /etc/newsyslog.conf
/data/nextcloud.log                          www:www     640  7     *    @T00  JC
/usr/local/www/nextcloud/data/nextcloud.log  www:www     640  7     *    @T00  JC
/var/log/php-fpm.log                         www:www     640  7     *    @T00  JC
/var/log/nginx/error.log                     www:www     640  7     *    @T00  JC
/var/log/nginx/access.log                    www:www     640  7     *    @T00  JC
EOF

Now you will not run out of free space as the logs grow over time.
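
You can verify the new entries without rotating anything by running newsyslog(8) in verbose no-action mode.

# newsyslog -n -v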

EOF


Less Known pkg(8) Features

I was asked many times to write an article about pkg(8) – the modern FreeBSD package manager, sometimes also called PKGng.

In this entry I will try to describe less known pkg(8) features.

About 8 years ago – when pkg(8) did not even exist – I wrote the HOWTO: keeping FreeBSD’s base system and packages up-to-date post. It was later even published in the BSD Magazine 2012/01 issue (Issue 30).

Back in 2011 keeping packages up to date was a little more tricky than it is now. You were forced to use FreeBSD’s STABLE branch for them, as packages for RELEASE were never updated – as is currently the case in the OpenBSD world. The packages in FreeBSD’s STABLE branch were built every 2 weeks, which was enough at that time.

You could of course compile everything from FreeBSD Ports using portmaster, but you would waste lots of your life compiling. When pkg_add/pkg_delete/pkg_info were THE package tools on FreeBSD, the pkg_upgrade script from the bsdadminscripts package was quite helpful with the upgrade process. It would fetch the latest available packages from the STABLE branch FTP server and update the installed packages. To check for security issues in packages another external tool called portaudit was needed.

Today we have pkg(8) with all its features, along with pkg upgrade to update the installed packages. Thanks to pkg audit the third party portaudit tool is no longer needed. We even have pkg autoremove to automatically remove unneeded dependencies.

I will try not to copy information already available in the great FreeBSD Handbook, namely its 4.4. Using pkg for Binary Package Management chapter.

Older FreeBSD Versions

Before FreeBSD 10.x, to use the new pkg(8) tools instead of the old pkg_* ones you needed to have WITH_PKGNG=yes in the /etc/make.conf file.

Currently the only supported releases of FreeBSD are the recently released 12.0 and the still more stable and polished 11.2, so there is no need to put anything in the /etc/make.conf file anymore to use the pkg(8) framework.

Database

The pkg(8) database (actually an SQLite database) is kept in the /var/db/pkg directory.

These are the contents of the /var/db/pkg dir just after the pkg(8) bootstrap process.

# find /var/db/pkg
/var/db/pkg
/var/db/pkg/FreeBSD.meta
/var/db/pkg/vuln.xml
/var/db/pkg/local.sqlite
/var/db/pkg/repo-FreeBSD.sqlite

The most important file is /var/db/pkg/local.sqlite as this is the database of installed packages and their files. By typing pkg shell you can actually connect to this SQLite database with the SQLite interpreter.

# pkg shell
-- Loading resources from /home/vermaden/.sqliterc
SQLite version 3.15.2 2016-11-28 19:13:37
Enter ".help" for usage hints.
> .q
#

If for some reason you find that the pkg(8) tools do not work or are broken, you may connect to the database with the sqlite3 command from the sqlite3 package. Do not use the sqlite package as it holds the 2.x version of SQLite, which is not forward compatible with the 3.x version used by pkg(8).

# file /var/db/pkg/*
/var/db/pkg/FreeBSD.meta:        ASCII text
/var/db/pkg/local.sqlite:        SQLite 3.x database, user version 34, last written using SQLite version 3015002
/var/db/pkg/repo-FreeBSD.sqlite: SQLite 3.x database, user version 2014, last written using SQLite version 3015002
/var/db/pkg/vuln.xml:            XML 1.0 document, UTF-8 Unicode text, with very long lines

# sqlite3 /var/db/pkg/local.sqlite
-- Loading resources from /home/vermaden/.sqliterc
SQLite version 3.26.0 2018-12-01 12:34:55
Enter ".help" for usage hints.
> .q
#

Lock/Unlock

With pkg(8), specified packages can now be locked with the pkg lock command. This means that the pkg upgrade or even pkg delete operations (or pkg autoremove) will not touch them. You can list locked packages with the -l option as shown below.

# pkg lock -l
Currently locked packages:
conky-1.9.0_6
exfat-utils-1.2.8
ffmpeg-4.1_1,1
fusefs-exfat-1.2.8
lame-3.100_2

# pkg delete exfat-utils
Checking integrity... done (0 conflicting)
The following package(s) are locked and may not be removed:

        exfat-utils

1 packages requested for removal: 1 locked, 0 missing
# 

As you can see it’s not possible to pkg delete the locked exfat-utils package. You will first have to unlock it with the pkg unlock command. You can do that interactively or non-interactively with the -y option as shown below.

# pkg unlock exfat-utils
exfat-utils-1.2.8: unlock this package? [y/N]: y
Unlocking exfat-utils-1.2.8

# pkg lock -y exfat-utils
Locking exfat-utils-1.2.8

Now, why would you lock any packages?

Based on my experience these are potential reasons to lock certain packages:

  • You combine packages with ports.
  • Package for the port does not exist.
  • Official package has different default options than yours.
  • You really want to use an older version of a package.

Actually, I use the lock/unlock mechanism because all of the above are true for me.

I combine ports and packages (a practice often discouraged in the FreeBSD world) because some software I use is not available as packages – because of licensing issues. These are anything related to the Microsoft exFAT filesystem (exfat-utils/fusefs-exfat) and MP3 (lame). What is more astonishing for me is that OpenBSD has provided a lame package for years yet the FreeBSD team is still scared of the patents. I also need to build a custom version of the ffmpeg package – just to include lame support, but still custom. The last thing I keep locked is Conky. It was and still is working great in the 1.9 version but its developers broke it badly in the 1.10 version (now even 1.11 is available). It was just not possible to right click with the mouse on the desktop and get the Openbox menu – or to name the issue precisely – Conky did not pass mouse events to the Window Manager that ruled the desktop. So I used another Ports tool, portdowngrade, to fetch the last 1.9 files into my Ports tree, then compiled the 1.9 conky package and locked it for good.

You probably already know that I prefer to run dzen2 for screen information, but I do rarely use conky for my ‘FreeBSD Dashboard’ with all the needed information, which I enable only when I need it – with the [Scroll Lock] key.

For the record – here is how it looks.

vermaden_2019-01-16_21-42-52.png

Provides

If you also happen to be a RHEL/Fedora (or just yum/rpm) user you have probably missed the ‘provides’ feature in FreeBSD’s pkg(8) package manager. Why is it so useful? Because with a ‘provides’ database you can install packages by specifying the exact binary or file name from the package. For example you can type yum install /sbin/ifconfig to install the net-tools package because the ‘provides’ database has that needed information.

What if I told you that you can achieve similar functionality with the pkg(8) tool?

The pkg-provides plugin allows you to query which package provides a particular file directly with pkg(8) tool.

It is even available as the pkg-provides package. Below I will show you how to install and configure it. First, install the package itself.

# pkg search provides
pkg-provides-0.5.0             Pkg plugin for querying which package provides a particular file

# pkg install pkg-provides
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
The following 1 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
        pkg-provides: 0.5.0 [FreeBSD]

Number of packages to be installed: 1

10 KiB to be downloaded.

Proceed with this action? [y/N]: y
[1/1] Fetching pkg-provides-0.5.0.txz: 100%   10 KiB   9.8kB/s    00:01    
Checking integrity... done (0 conflicting)
[1/1] Installing pkg-provides-0.5.0...
[1/1] Extracting pkg-provides-0.5.0: 100%
Message from pkg-provides-0.5.0:

======================= pkg plugin activation ========================
  In order to use the pkg-provides plugin you need to enable plugins in pkg.
  To do this, uncomment the following lines in /usr/local/etc/pkg.conf file
  and add pkg-provides to the supported plugin list

  PKG_PLUGINS_DIR = "/usr/local/lib/pkg/";
  PKG_ENABLE_PLUGINS = true;
  PLUGINS [ provides ];

  After that run `pkg plugins' to see the plugins handled by pkg`.

  To update the provides database run `pkg provides -u`

  ====================================================================

Then configure the /usr/local/etc/pkg.conf file.

# cat << __EOF__ >> /usr/local/etc/pkg.conf
PKG_PLUGINS_DIR = "/usr/local/lib/pkg/";
PKG_ENABLE_PLUGINS = true;
PLUGINS [ provides ];
__EOF__

Now you have a new command called pkg provides, as shown below.

# pkg provides
usage: pkg provides [-uf] pattern

A plugin for querying which package provides a particular file

# pkg provides bin/pldd
Provides database not found, please update first.

You can update the ‘provides’ database with the -u option.

# pkg provides -u
Fetching provides database: 100%   29 MiB 700.9kB/s    00:43    
Extracting database....success

Example usage of the pkg provides plugin.

# pkg provides bin/pldd
Name    : ptools2-0.5
Desc    : Toolset based on Solaris ptools functionality
Repo    : FreeBSD
Filename: /usr/local/bin/pldd

Name    : linux_base-c7-7.4.1708_6
Desc    : Base set of packages needed in Linux mode (Linux CentOS 7.4.1708)
Repo    : FreeBSD
Filename: /compat/linux/usr/bin/pldd

# pkg install /compat/linux/usr/bin/pldd
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
pkg: No packages available to install matching '/compat/linux/usr/bin/pldd' have been found in the repositories

Although it’s not possible to, for example, install the linux_base-c7 package by typing the pkg install /compat/linux/usr/bin/pldd command, it is possible to check which package contains that file.

The next time you type the pkg upgrade command you will also see the provides database being updated.

# pkg upgrade
Updating FreeBSD repository catalogue...
Fetching meta.txz: 100%    944 B   0.9kB/s    00:01    
Fetching packagesite.txz: 100%    6 MiB 376.5kB/s    00:18    
Processing entries: 100%
Fetching provides database: 100%   29 MiB 386.3kB/s    01:18    
Extracting database....success
FreeBSD repository update completed. 32542 packages processed.
All repositories are up to date.
Checking integrity... done (0 conflicting)
(...)

The pkg provides database takes up a notable amount of space in the /var/db/pkg directory.

# file /var/db/pkg/* /var/db/pkg/*/* | sort -n
/var/db/pkg/FreeBSD.meta: ASCII text
/var/db/pkg/local.sqlite: SQLite 3.x database, user version 34, last written using SQLite version 3015002
/var/db/pkg/provides: directory
/var/db/pkg/provides/provides.db: ASCII text
/var/db/pkg/repo-FreeBSD.sqlite: SQLite 3.x database, user version 2014, last written using SQLite version 3015002
/var/db/pkg/vuln.xml: XML 1.0 document, UTF-8 Unicode text, with very long lines

If you use ZFS compression such as LZ4 then it will not take much space, as shown below.

# du -csm /var/db/pkg/*
1       /var/db/pkg/FreeBSD.meta
32      /var/db/pkg/local.sqlite
72      /var/db/pkg/provides
33      /var/db/pkg/repo-FreeBSD.sqlite
2       /var/db/pkg/vuln.xml
138     total

… but if you use UFS then that almost 600 MB database may scare you a little 🙂

# du -csmA /var/db/pkg/*
1       /var/db/pkg/FreeBSD.meta
68      /var/db/pkg/local.sqlite
571     /var/db/pkg/provides
52      /var/db/pkg/repo-FreeBSD.sqlite
6       /var/db/pkg/vuln.xml
694     total

Which

While pkg provides gives information about files of packages that are not yet installed, the pkg which command is the pkg(8) equivalent of the classic UNIX which command. It shows which package a file belongs to (if any).

# pkg which /boot/modules/drm.ko
/boot/modules/drm.ko was installed by package drm-fbsd11.2-kmod-4.11g20181210

# pkg which /boot/kernel/drm.ko
/boot/kernel/drm.ko was not found in the database

Double Your Gun Double Your Fun

Sometimes it’s faster to use both ‘whiches’ at the same time to get the needed answer.

# which firefox
/usr/local/bin/firefox

# pkg which `which firefox`
/usr/local/bin/firefox was installed by package firefox-64.0.2,1

Periodic

It may happen that you see something like the output below.

# pkg install parallel
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
pkg: Cannot get an advisory lock on a database, it is locked by another process

… but you did not launch any other pkg(8) instances – what is going on here? Let’s check the ps(1) output.

# ps ax | grep pkg
 8540  -  S        0:00.00 /bin/sh - /usr/local/etc/periodic/daily/411.pkg-backup
 8551  -  S        0:00.00 /usr/local/sbin/pkg shell .dump
 8555  -  D        0:01.08 /usr/local/sbin/pkg shell .dump

FreeBSD’s periodic scripts are doing their job.

To check which ones they are, look here.

# find /etc/periodic /usr/local/etc/periodic -name \*pkg\*
/usr/local/etc/periodic/daily/490.status-pkg-changes
/usr/local/etc/periodic/daily/411.pkg-backup
/usr/local/etc/periodic/security/460.pkg-checksum
/usr/local/etc/periodic/security/410.pkg-audit
/usr/local/etc/periodic/weekly/400.status-pkg

If you think that any of those activities are not needed then you may disable them with the respective values in the /etc/periodic.conf file.

# find /etc/periodic /usr/local/etc/periodic -name \*pkg\* | xargs grep -m 1 -E -o "[a-z_]+_enable" 
/usr/local/etc/periodic/daily/490.status-pkg-changes:daily_status_pkgng_changes_enable
/usr/local/etc/periodic/daily/411.pkg-backup:daily_backup_pkgng_enable
/usr/local/etc/periodic/security/460.pkg-checksum:security_status_pkgchecksum_enable
/usr/local/etc/periodic/security/410.pkg-audit:security_status_pkgaudit_enable
/usr/local/etc/periodic/weekly/400.status-pkg:weekly_status_pkgng_enable

For example, if you would like to disable the /usr/local/etc/periodic/daily/490.status-pkg-changes execution you will need to add daily_status_pkgng_changes_enable="NO" to the /etc/periodic.conf file.
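
To disable all of these pkg(8) periodic jobs at once you could append something like the lines below to the /etc/periodic.conf file – pick only the ones you really do not need.

daily_status_pkgng_changes_enable="NO"
daily_backup_pkgng_enable="NO"
security_status_pkgchecksum_enable="NO"
security_status_pkgaudit_enable="NO"
weekly_status_pkgng_enable="NO"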

Let’s check the ps(1) output again then.

# ps ax | grep pkg
 8574  0  S+       0:00.00 grep --color pkg

The periodic job has already finished. You may now install your package as usual.

# pkg install parallel
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
The following 1 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
        parallel: 20171222

Number of packages to be installed: 1

The process will require 3 MiB more space.
1 MiB to be downloaded.

Proceed with this action? [y/N]: n
# 

Stats

While the pkg stats command provides some statistics on the installed packages, it’s not that useful for finding which packages take the most space.

# pkg stats
Local package database:
        Installed packages: 1081
        Disk space occupied: 9 GiB

Remote package database(s):
        Number of repositories: 1
        Packages available: 32518
        Unique packages: 32518
        Total size of packages: 78 GiB

There is also the pkg size command that will display the space used by each package, but without the package name … not very useful.

# pkg size | head
10.5MiB
2.06MiB
27.4MiB
2.59MiB
5.17MiB
515KiB
23.2MiB
609KiB
587KiB
127KiB

Also the man page for pkg size does not exist.

# man pkg-size
No manual entry for pkg-size

You can use the pkg info -as command but not only will it not sort its output in any way – it will also display the space usage in various units like KiB/MiB/GiB, which does not help … fortunately the -h option of sort(1) comes to help.

Using the following alias you can sort packages by their space usage. I limited the output to the 20 largest packages but feel free to change it to your needs.

# alias pkg-size='pkg info -as | sort -k 2 -h | tail -20 | column -t'
# which pkg-size
pkg-size: aliased to pkg info -as | sort -k 2 -h | tail -20 | column -t
# pkg-size
python27-2.7.15          68.2MiB
gtk3-3.22.30_4           68.8MiB
opencollada-1.6.68_1     75.8MiB
py27-ansible-2.7.5       88.6MiB
argyllcms-1.9.2_4        92.4MiB
webkit2-gtk3-2.22.5      92.9MiB
gimp-app-2.10.8_1,1      95.4MiB
python36-3.6.8           104MiB
samba47-4.7.12           145MiB
openjdk8-8.192.26_3      162MiB
boost-libs-1.69.0        163MiB
thunderbird-60.4.0_1     167MiB
firefox-64.0.2,1         174MiB
binutils-2.30_7,1        195MiB
linux_base-c6-6.10       197MiB
gcc6-6.5.0_3             241MiB
chromium-71.0.3578.98_2  251MiB
libreoffice-6.0.7_4      353MiB
virtualbox-ose-5.2.22_2  375MiB
llvm60-6.0.1_5           818MiB

Short Names

The pkg(8) tools also support short names for the arguments. For example you do not have to type pkg autoremove. Only the pkg autor part is needed for the command to work.

Example short names below.

# pkg autor
# pkg upg
# pkg inf

Metadata

vermaden_2019-01-16_21-32-07.png

Many problems with pkg(8) are triggered by an old metadata database. In case you face any pkg(8) issue, first update (forcefully) its database as shown below.

# pkg update -f
Updating FreeBSD repository catalogue...
Fetching meta.txz: 100%    944 B   0.9kB/s    00:01    
Fetching packagesite.txz: 100%    6 MiB 352.9kB/s    00:19    
Processing entries: 100%
Fetching provides database: 100%   28 MiB 658.3kB/s    00:44    
Extracting database....success
FreeBSD repository update completed. 31778 packages processed.
All repositories are up to date.

For the record – the ‘provides’ database is also updated in such a process.

Fixing Broken Dependency

There was a time when one missing dependency on the vulnerable www/libxul19 package tortured me for quite a while.

I was even desperate enough to compile everything with portmaster already.

I started with the portmaster --check-depends command, but answered no (‘n‘) when asked for a fix, as it would downgrade a lot of packages needlessly.

# portmaster --check-depends
(...)
Checking dependencies: evince
graphics/evince has a missing dependency: www/libxul19
(...)

>>> Missing package dependencies were detected.
>>> Found 1 issue(s) in total with your package database.

The following packages will be installed:

        Downgrading perl: 5.14.2_3 -> 5.14.2_2
        Downgrading glib: 2.34.3 -> 2.28.8_5
        Downgrading gio-fam-backend: 2.34.3 -> 2.28.8_1
        Downgrading libffi: 3.0.12 -> 3.0.11
        Downgrading gobject-introspection: 1.34.2 -> 0.10.8_3
        Downgrading atk: 2.6.0 -> 2.0.1
        Downgrading gdk-pixbuf2: 2.26.5 -> 2.23.5_3
        Downgrading pango: 1.30.1 -> 1.28.4_1
        Downgrading gtk-update-icon-cache: 2.24.17 -> 2.24.6_1
        Downgrading dbus: 1.6.8 -> 1.4.14_4
        Downgrading gtk: 2.24.17 -> 2.24.6_2
        Downgrading dbus-glib: 0.100.1 -> 0.94
        Installing libxul: 1.9.2.28_1

The installation will require 66 MB more space

38 MB to be downloaded

>>> Try to fix the missing dependencies [y/N]: n
>>> Summary of actions performed:

www/libxul19 dependency failed to be fixed

>>> There are still missing dependencies.
>>> You are advised to try fixing them manually.

>>> Also make sure to check 'pkg updating' for known issues.

Let’s see what pkg(8) shows we have installed.

# pkg info | grep libxul
libxul-10.0.12                 Mozilla runtime package that can be used to bootstrap XUL+XPCOM apps

# pkg info -qoa | grep libxul
www/libxul

So the problem is that we have www/libxul installed instead of www/libxul19 and that is why portmaster (and not only portmaster) complains about it.

Before pkg(8) was introduced it was easy to just grep -r the entire /var/db/pkg directory with its ‘file database’, but now it’s a bit more complicated as the package database is kept in an SQLite database. Using the pkg shell command you can connect to that database. Let’s check what we can find there.

# pkg shell
SQLite version 3.7.13 2012-06-11 02:05:22
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> .databases
seq  name             file
---  ---------------  ----------------------------------------------------------
0    main             /var/db/pkg/local.sqlite
sqlite> .tables
categories       licenses         pkg_directories  scripts
deps             mtree            pkg_groups       shlibs
directories      options          pkg_licenses     users
files            packages         pkg_shlibs
groups           pkg_categories   pkg_users
sqlite> .header on
sqlite> .mode column
sqlite> pragma table_info(deps);
cid         name        type        notnull     dflt_value  pk
----------  ----------  ----------  ----------  ----------  ----------
0           origin      TEXT        1                       1
1           name        TEXT        1                       0
2           version     TEXT        1                       0
3           package_id  INTEGER     0                       1
sqlite> .quit

So now we know that the ‘deps‘ table is probably what we are looking for ;).

As pkg shell is quite limited for SQLite ‘browsing’ I will use the sqlite3 command itself. By limited I mean that you cannot run a pkg shell "select * from deps;" query – you first need to start pkg shell and only then type your query.

# sqlite3 -column /var/db/pkg/local.sqlite "select * from deps;" | grep libxul
www/libxul19   libxul      1.9.2.28_1  104

The second column is name so let’s try to use it.

# sqlite3 -header -column /var/db/pkg/local.sqlite "select * from deps where name='libxul';"
origin        name        version     package_id
------------  ----------  ----------  ----------
www/libxul19  libxul      1.9.2.28_1  104

So now we have the ‘problematic’ dependency entry nailed down; let’s modify it a little to match the real installed package state.

# sqlite3 /var/db/pkg/local.sqlite "update deps set origin='www/libxul' where name='libxul';"
# sqlite3 /var/db/pkg/local.sqlite "update deps set version='10.0.12' where name='libxul';"

You can of course do it the ‘official’ way by using the pkg shell command.

# pkg shell
SQLite version 3.7.13 2012-06-11 02:05:22
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> update deps set origin='www/libxul' where name='libxul';
sqlite> update deps set version='10.0.12' where name='libxul';
sqlite> .header on
sqlite> .mode column
sqlite> select * from deps where name='libxul';
origin      name        version     package_id
----------  ----------  ----------  ----------
www/libxul  libxul      10.0.12     104
sqlite> .quit

Now portmaster is happy and does not complain about any missing dependencies.

# portmaster --check-depends
(...)
Checking dependencies: zenity
Checking dependencies: zip
Checking dependencies: zsh
# 

Voila! Problem solved 😉

… but pkg(8) has a tool for that already 🙂

It’s called pkg set and the two most useful options from man pkg-set are shown below.

  -n oldname:newname, --change-name oldname:newname
       Change the package name of a given dependency from oldname to newname.

(...)

  -o oldorigin:neworigin, --change-origin oldorigin:neworigin
       Change the port origin of a given dependency from oldorigin to neworigin.
       This corresponds to the port directory that the package originated from.
       Typically, this is only needed for upgrading a library or package that
       has MOVED or when the default version of a major port dependency changes.
       (DEPRECATED) Usually this will be explained in /usr/ports/UPDATING.
       Also see pkg-updating(8) and EXAMPLES.

In our case we would use the pkg set -o www/libxul19:www/libxul command.

I am not sure if it would have solved that problem in the same way, as I also updated the version in the database.
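
For the record, the pkg set variant of the whole fix would look like the sketch below, with the version column still updated by hand in the SQLite database.

# pkg set -o www/libxul19:www/libxul
# sqlite3 /var/db/pkg/local.sqlite "update deps set version='10.0.12' where name='libxul';"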

UPDATING

If you get into any trouble with the pkg upgrade command then you should also check the latest version of the /usr/ports/UPDATING file – available after updating the Ports tree, for example with the portsnap fetch update command.

It describes important changes in Ports (and packages, as packages are built from Ports).

# less /usr/ports/UPDATING

(...)
20180518:
  AFFECTS: users of sysutils/ansible*
  AUTHOR: lifanov@FreeBSD.org

  Ansible ports are now flavored. Package names for Ansible changed
  to include python version. Poudriere and package users don't need
  to do anything.

  To rename an installed package to match the new naming scheme,
  for example, for ansible24, run:

   # pkg set -n ansible24:py27-ansible24

(...)

20180214:
  AFFECTS: users of lang/ruby23
  AUTHOR: swills@FreeBSD.org

  The default ruby version has been updated from 2.3 to 2.4.

  If you compile your own ports you may keep 2.3 as the default version by
  adding the following lines to your /etc/make.conf file:

  #
  # Keep ruby 2.3 as default version
  #
  DEFAULT_VERSIONS+=ruby=2.3

  If you wish to update to the new default version, you need to first stop any
  software that uses ruby. Then, you will need to follow these steps, depending
  upon how you manage your system.

  If you use pkgng, simply upgrade:
  # pkg upgrade

  If you use portmaster, install new ruby, then rebuild all ports that depend
  on ruby:
  # portmaster -o lang/ruby24 lang/ruby23
  # portmaster -R -r ruby-2.4

  If you use portupgrade, install new ruby, then rebuild all ports that depend
  on ruby:

  # pkg delete -f ruby portupgrade
  # make -C /usr/ports/ports-mgmt/portupgrade install clean
  # pkg set -o lang/ruby23:lang/ruby24
  # portupgrade -x ruby-2.4.\* -fr lang/ruby24

(...)

The pkg(8) framework also has a tool for that – the pkg updating command. Check the man pkg-updating page for details. The most common use case is the -d argument with a date, as shown below.

# pkg updating -d 20190101
20190103:
  AFFECTS: users of multimedia/vlc*
  AUTHOR: riggs@FreeBSD.org

  The multimedia/vlc port has been upgraded to 3.0.5, the latest upstream
  release. Subsequently, multimedia/vlc-qt4 and multimedia/vlc3 have been
  retired and removed from the ports tree. Users who previously used
  multimedia/vlc3 might want to switch to multimedia/vlc with the following
  commands:

  # pkg install multimedia/vlc
    or
  # portmaster -o multimedia/vlc multimedia/vlc3
    or
  # portupgrade -o multimedia/vlc multimedia/vlc3

You may as well check the UPDATING file online at the https://www.freshports.org/UPDATING address.

Bulletproof Upgrades with ZFS Boot Environments

To be absolutely sure that you will have a working system no matter what goes wrong with the pkg upgrade command, just use ZFS Boot Environments. I gave talks about their features not so long ago – in Poland at PBUG and in the Netherlands at NLUUG. The latest PDF presentation is still available at the https://is.gd/BECTL link.

The procedure with the beadm command looks like this.

# beadm create safepoint
Created successfully

# beadm list
BE           Active Mountpoint  Space Created
11.2-RELEASE NR     /            5.7G 2018-12-01 13:09
safepoint    -      -          316.0K 2019-01-16 23:03

# pkg upgrade

Now if anything goes wrong you still have a fully working system under the safepoint boot environment name.

Just reboot into it (select it in the FreeBSD loader) and you are back with a working system, as if you traveled back in time with a time machine.
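
You can also activate the safepoint boot environment from the command line before rebooting instead of selecting it in the loader.

# beadm activate safepoint
# shutdown -r now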

Query

You can also use the pkg query command to search for the information you need.

For example, to ‘emulate’ the pkg info -r pkg-name command, which displays the list of packages that require pkg-name, you can use the pkg query command as shown below.

# pkg info -r sqlite3
sqlite3-3.26.0:
        colord-gtk-0.1.26
        py27-sqlite3-2.7.15_7
        freeciv-2.5.10
        colord-1.3.5
        libsoup-2.62.3
        libsoup-gnome-2.62.3
        subversion-1.11.0_1
        nss-3.41_1
        webkit-gtk2-2.4.11_19
        filezilla-3.36.0_1
        epiphany-3.28.5_1
        darktable-2.4.4_3
        aria2-1.34.0_1
        webkit2-gtk3-2.22.5
        qt5-webkit-5.212.0.a2_17
        qt5-sqldrivers-sqlite3-5.12.0
        hugin-2018.0.0_6
        pidgin-2.13.0
        thunderbird-60.4.0_1
        midori-0.7.0
        firefox-64.0.2,1

# pkg query -e '%n = sqlite3' %ro
graphics/colord-gtk
databases/py-sqlite3
games/freeciv
graphics/colord
devel/libsoup
devel/libsoup-gnome
devel/subversion
security/nss
www/webkit-gtk2
ftp/filezilla
www/epiphany
graphics/darktable
www/aria2
www/webkit2-gtk3
www/qt5-webkit
databases/qt5-sqldrivers-sqlite3
graphics/hugin
net-im/pidgin
mail/thunderbird
www/midori
www/firefox

If you would like to know when each package was installed for the first time then use this spell below.

# pkg query "%t %n-%v" \
    | sort -n \
    | while read timestamp pkgname
      do
        echo "$(date -r $timestamp) $pkgname"
      done | ( head; echo; tail )
Fri Jul  7 14:17:29 CEST 2017 libpciaccess-0.13.5
Fri Jul  7 14:17:35 CEST 2017 libedit-3.1.20170329_2,1
Fri Jul  7 14:18:09 CEST 2017 font-util-1.3.1
Fri Jul  7 14:18:10 CEST 2017 xcb-util-0.4.0_2,1
Fri Jul  7 15:26:56 CEST 2017 xcb-util-renderutil-0.3.9_1
Fri Jul  7 15:26:57 CEST 2017 dejavu-2.37
Fri Jul  7 15:27:00 CEST 2017 font-misc-meltho-1.0.3_3
Fri Jul  7 15:27:02 CEST 2017 font-misc-ethiopic-1.0.3_3
Fri Jul  7 15:27:06 CEST 2017 font-bh-ttf-1.0.3_3
Fri Jul  7 15:27:08 CEST 2017 tpm-emulator-0.7.4_2

Sun Jan 13 20:48:01 CET 2019 firefox-64.0.2,1
Sun Jan 13 20:48:01 CET 2019 htop-2.2.0_1
Wed Jan 16 23:08:21 CET 2019 vlc-3.0.6,4
Wed Jan 16 23:08:21 CET 2019 xdg-utils-1.1.3
Wed Jan 16 23:08:25 CET 2019 phonon-qt4-4.10.2
Wed Jan 16 23:08:25 CET 2019 physfs-3.0.1
Wed Jan 16 23:08:25 CET 2019 py27-pyasn1-0.4.5
Wed Jan 16 23:08:26 CET 2019 chromium-71.0.3578.98_2
Wed Jan 16 23:08:26 CET 2019 moreutils-0.63
Wed Jan 16 23:08:26 CET 2019 p5-URI-1.76

You can also display the packages that will not be removed by the pkg autoremove command because you installed them directly.

# pkg query -e "%a != 1" "%n" | tail
xmp
xorg
xprintidle
xterm
xxkb
youtube_dl
zenity
zfs-stats
zip
zsh

Rosetta Stone

The FreeBSD Wiki page also provides such a table but the information there is incomplete.

Thus I copied the table and filled in the missing data.

Below you will find the updated Rosetta Stone, each entry listing the function, the old pkg_* tool invocation, and (after the arrow) the new pkg(8) equivalent.

  • List of installed packages: pkg_info → pkg info
  • Basic info about package: pkg_info pkgname-pkgversion → pkg info pkgname / pkg info category/name / pkg info pkgname-pkgversion
  • Detailed info about package: N/A → pkg info -f pkgname / pkg info -f category/name / pkg info -f pkgname-pkgversion
  • List all files in installed package: pkg_info -L pkgname-pkgversion → pkg info -l pkgname / pkg info -l category/name / pkg info -l pkgname-pkgversion
  • Find which package provides file: pkg_info -W /path/to/my/file → pkg which /path/to/my/file
  • Install local package: pkg_add ./localpkg.tbz → pkg add ./localpkg.txz
  • Install remote package: pkg_add -r mypackage → pkg install mypackage / pkg install category/name / pkg install pkgname-pkgversion
  • Search for remote package: ls /usr/ports/* | grep mypackage → pkg search mypackage / pkg search category/name / pkg search pkgname-pkgversion
  • Search for detailed info about remote package: make search name=mypackage / make search key=mypackage → pkg search -f mypackage / pkg search -f category/name / pkg search -f pkgname-pkgversion
  • Reverse deps of installed package: pkg_info -R pkgname-pkgversion → pkg info -r mypackage / pkg info -r category/name / pkg info -r pkgname-pkgversion
  • Deps of installed package: pkg_info -r pkgname-pkgversion → pkg info -d mypackage / pkg info -d category/name / pkg info -d pkgname-pkgversion
  • Remove unused packages installed as deps: N/A → pkg autoremove
  • Binary upgrade of installed packages: pkg_upgrade (FreeBSD Ports) → pkg upgrade
  • Create remote repository: N/A → pkg repo /directory/with/packages
  • Manipulate packages in jail: N/A → pkg -j
  • Manipulate packages in chroot: pkg_add -C → pkg -c
  • Info about installed packages using RE: pkg_info -x → pkg info -x
  • Info about installed packages using extended RE: pkg_info -X → pkg info -X
  • Info about installed packages using globbing: pkg_info → pkg info -g
  • Check for known vulnerabilities: portaudit (FreeBSD Ports) → pkg audit
  • Out of date packages: pkg_version -l "<" → pkg version -l "<"
  • Out of date packages (INDEX based): pkg_version -Il "<" → pkg version -Il "<"
  • Out of date packages compared to remote repo: N/A → pkg upgrade -n
  • Statistics about installed packages: N/A → pkg stats
  • Checking for missing dependencies (with fix): N/A → pkg check -d
  • Port origin: pkg_info -o → pkg info -o

If you know any other useful pkg(8) spells then let me know 🙂

EOF

 

GlusterFS Cluster on FreeBSD with Ansible and GNU Parallel

Today I would like to present an article about setting up a GlusterFS cluster on FreeBSD with the Ansible and GNU Parallel tools.

gluster-logo.png

To cite Wikipedia “GlusterFS is a scale-out network-attached storage file system. It has found applications including cloud computing, streaming media services, and content delivery networks.” The GlusterFS page describes it similarly “Gluster is a scalable, distributed file system that aggregates disk storage resources from multiple servers into a single global namespace.”

Here are its advantages:

  • Scales to several petabytes.
  • Handles thousands of clients.
  • POSIX compatible.
  • Uses commodity hardware.
  • Can use any on-disk filesystem that supports extended attributes.
  • Accessible using industry standard protocols like NFS and SMB.
  • Provides replication/quotas/geo-replication/snapshots/bitrot detection.
  • Allows optimization for different workloads.
  • Open Source.

Lab Setup

It will be entirely VirtualBox based and it will consist of 6 hosts. To avoid creating 6 identical FreeBSD installations I used the 12.0-RELEASE virtual machine image available directly from the FreeBSD Project.

There are several formats available – qcow2/raw/vhd/vmdk – but as I will be using VirtualBox I used the VMDK one.

I will use different prompts depending on where the command is executed to make the article more readable. When there is a '%' at the prompt a regular user is needed, and when there is a '#' at the prompt a superuser is needed.

gluster1 #    // command run on the gluster1 node
gluster* #    // command run on all gluster nodes
client #      // command run on gluster client
vbhost %      // command run on the VirtualBox host

Here is the list of the machines for the GlusterFS cluster:

10.0.10.11 gluster1
10.0.10.12 gluster2
10.0.10.13 gluster3
10.0.10.14 gluster4
10.0.10.15 gluster5
10.0.10.16 gluster6

Each VirtualBox virtual machine for FreeBSD is the default one (as suggested by the VirtualBox wizard) with 512 MB RAM and a NAT Network, as shown in the image below.

virtualbox-freebsd-gluster-host.jpg

Here is the configuration of the NAT Network on VirtualBox.

virtualbox-nat-network.jpg

The cloned/copied FreeBSD-12.0-RELEASE-amd64.vmdk images will need to have different UUIDs, so we will use the VBoxManage internalcommands sethduuid command to achieve this.

vbhost % for I in $( seq 6 ); do cp FreeBSD-12.0-RELEASE-amd64.vmdk    vbox_GlusterFS_${I}.vmdk; done
vbhost % for I in $( seq 6 ); do VBoxManage internalcommands sethduuid vbox_GlusterFS_${I}.vmdk; done

To start the whole GlusterFS environment on VirtualBox use these commands.

vbhost % VBoxManage list vms | grep GlusterFS
"FreeBSD GlusterFS 1" {162a3b6f-4ec9-4709-bff8-162b0c8c9c41}
"FreeBSD GlusterFS 2" {2e30326c-ac5d-41d2-9b28-483375df38f6}
"FreeBSD GlusterFS 3" {6b2747ab-3ec6-4b1a-a28e-5d871d7891b3}
"FreeBSD GlusterFS 4" {12379cf8-31d9-4ff1-9945-465fc3ed15f0}
"FreeBSD GlusterFS 5" {a4b0d515-5924-4517-9052-df238c366f2b}
"FreeBSD GlusterFS 6" {66621755-1b97-4486-aa15-a7bec9edb343}

Check which GlusterFS machines are running.

vbhost % VBoxManage list runningvms | grep GlusterFS
vbhost %

Start the machines in VirtualBox headless mode in parallel.

vbhost % VBoxManage list vms \
           | grep GlusterFS \
           | awk -F \" '{print $2}' \
           | while read I; do VBoxManage startvm "${I}" --type headless & done

After that command you should see these machines running.

vbhost % VBoxManage list runningvms
"FreeBSD GlusterFS 1" {162a3b6f-4ec9-4709-bff8-162b0c8c9c41}
"FreeBSD GlusterFS 2" {2e30326c-ac5d-41d2-9b28-483375df38f6}
"FreeBSD GlusterFS 3" {6b2747ab-3ec6-4b1a-a28e-5d871d7891b3}
"FreeBSD GlusterFS 4" {12379cf8-31d9-4ff1-9945-465fc3ed15f0}
"FreeBSD GlusterFS 5" {a4b0d515-5924-4517-9052-df238c366f2b}
"FreeBSD GlusterFS 6" {66621755-1b97-4486-aa15-a7bec9edb343}

Before we try to connect to our FreeBSD machines we need to make a minimal network configuration. Each FreeBSD machine will have a minimal /etc/rc.conf file, as shown in the example for the gluster1 host.

gluster1 # cat /etc/rc.conf
hostname=gluster1
ifconfig_DEFAULT="inet 10.0.10.11/24 up"
defaultrouter=10.0.10.1
sshd_enable=YES

For the setup purposes we will need to allow root login on these FreeBSD GlusterFS machines with the PermitRootLogin yes option in the /etc/ssh/sshd_config file. You will also need to restart the sshd(8) service after the changes.

gluster1 # grep '^PermitRootLogin' /etc/ssh/sshd_config
PermitRootLogin yes
# service sshd restart

By using NAT Network with Port Forwarding the FreeBSD machines will be accessible on the localhost ports. For example the gluster1 machine will be available on port 2211, the gluster2 machine will be available on port 2212 and so on. This is shown in the sockstat utility output below.

vbhost % sockstat -l4
USER     COMMAND    PID   FD PROTO  LOCAL ADDRESS         FOREIGN ADDRESS
vermaden VBoxNetNAT 57622 17 udp4   *:*                   *:*
vermaden VBoxNetNAT 57622 19 tcp4   *:2211                *:*
vermaden VBoxNetNAT 57622 20 tcp4   *:2212                *:*
vermaden VBoxNetNAT 57622 21 tcp4   *:2213                *:*
vermaden VBoxNetNAT 57622 22 tcp4   *:2214                *:*
vermaden VBoxNetNAT 57622 23 tcp4   *:2215                *:*
vermaden VBoxNetNAT 57622 24 tcp4   *:2216                *:*
vermaden VBoxNetNAT 57622 28 tcp4   *:2240                *:*
vermaden VBoxNetNAT 57622 29 tcp4   *:9140                *:*
vermaden VBoxNetNAT 57622 30 tcp4   *:2220                *:*
root     sshd       96791 4  tcp4   *:22                  *:*

I think the correlation between the IP address and the port on the host is obvious 🙂

Here is the list of the machines with ports on localhost:

10.0.10.11 gluster1 2211
10.0.10.12 gluster2 2212
10.0.10.13 gluster3 2213
10.0.10.14 gluster4 2214
10.0.10.15 gluster5 2215
10.0.10.16 gluster6 2216
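
For reference, such port forwarding rules can also be created from the command line with the VBoxManage natnetwork modify command instead of the GUI - a sketch, assuming the NAT Network is named NatNetwork:

vbhost % for I in $( seq 6 ); do \
           VBoxManage natnetwork modify --netname NatNetwork \
             --port-forward-4 "gluster${I}:tcp:[127.0.0.1]:221${I}:[10.0.10.1${I}]:22"; \
         done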

To connect to such a machine from the VirtualBox host system you will need this command:

vbhost % ssh -l root localhost -p 2211

To avoid typing that every time you need to log in to gluster1, let's make some changes to the ~/.ssh/config file for convenience. This way it will be possible to log in in a much shorter way.

vbhost % ssh gluster1

Here is the modified ~/.ssh/config file.

vbhost % cat ~/.ssh/config
# GENERAL
  StrictHostKeyChecking no
  LogLevel              quiet
  KeepAlive             yes
  ServerAliveInterval   30
  VerifyHostKeyDNS      no

# ALL HOSTS SETTINGS
Host *
  StrictHostKeyChecking no
  Compression           yes

# GLUSTER
Host gluster1
  User root
  Hostname 127.0.0.1
  Port 2211

Host gluster2
  User root
  Hostname 127.0.0.1
  Port 2212

Host gluster3
  User root
  Hostname 127.0.0.1
  Port 2213

Host gluster4
  User root
  Hostname 127.0.0.1
  Port 2214

Host gluster5
  User root
  Hostname 127.0.0.1
  Port 2215

Host gluster6
  User root
  Hostname 127.0.0.1
  Port 2216

I assume that you already have some SSH keys generated (with ~/.ssh/id_rsa as the private key) so let's remove the need to type a password on each SSH login.
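
If you do not have a key pair yet, one can be generated first with ssh-keygen(1):

vbhost % ssh-keygen -t rsa -f ~/.ssh/id_rsa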

vbhost % ssh-copy-id -i ~/.ssh/id_rsa gluster1
Password for root@gluster1:

vbhost % ssh-copy-id -i ~/.ssh/id_rsa gluster2
Password for root@gluster2:

vbhost % ssh-copy-id -i ~/.ssh/id_rsa gluster3
Password for root@gluster3:

vbhost % ssh-copy-id -i ~/.ssh/id_rsa gluster4
Password for root@gluster4:

vbhost % ssh-copy-id -i ~/.ssh/id_rsa gluster5
Password for root@gluster5:

vbhost % ssh-copy-id -i ~/.ssh/id_rsa gluster6
Password for root@gluster6:

Ansible Setup

As we already have SSH integration, we will now configure Ansible to connect to our 'localhost' ports for the FreeBSD machines.

Here is the Ansible hosts file.

vbhost % cat hosts
[gluster]
gluster1 ansible_port=2211 ansible_host=127.0.0.1 ansible_user=root
gluster2 ansible_port=2212 ansible_host=127.0.0.1 ansible_user=root
gluster3 ansible_port=2213 ansible_host=127.0.0.1 ansible_user=root
gluster4 ansible_port=2214 ansible_host=127.0.0.1 ansible_user=root
gluster5 ansible_port=2215 ansible_host=127.0.0.1 ansible_user=root
gluster6 ansible_port=2216 ansible_host=127.0.0.1 ansible_user=root

[gluster:vars]
ansible_python_interpreter=/usr/local/bin/python2.7

Here is the listing of these machines using the ansible command.

vbhost % ansible -i hosts --list-hosts gluster
  hosts (6):
    gluster1
    gluster2
    gluster3
    gluster4
    gluster5
    gluster6

Let's verify that our Ansible setup works correctly.

vbhost % ansible -i hosts -m raw -a 'echo' gluster
gluster1 | CHANGED | rc=0 >>



gluster3 | CHANGED | rc=0 >>



gluster2 | CHANGED | rc=0 >>



gluster5 | CHANGED | rc=0 >>



gluster4 | CHANGED | rc=0 >>



gluster6 | CHANGED | rc=0 >>

It works as desired.

We are not able to use Ansible modules other than raw because by default Python is not installed on FreeBSD, as shown below.

vbhost % ansible -i hosts -m ping gluster
gluster1 | FAILED! => {
    "changed": false,
    "module_stderr": "",
    "module_stdout": "/bin/sh: /usr/local/bin/python2.7: not found\r\n",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 127
}
gluster2 | FAILED! => {
    "changed": false,
    "module_stderr": "",
    "module_stdout": "/bin/sh: /usr/local/bin/python2.7: not found\r\n",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 127
}
gluster4 | FAILED! => {
    "changed": false,
    "module_stderr": "",
    "module_stdout": "/bin/sh: /usr/local/bin/python2.7: not found\r\n",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 127
}
gluster5 | FAILED! => {
    "changed": false,
    "module_stderr": "",
    "module_stdout": "/bin/sh: /usr/local/bin/python2.7: not found\r\n",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 127
}
gluster3 | FAILED! => {
    "changed": false,
    "module_stderr": "",
    "module_stdout": "/bin/sh: /usr/local/bin/python2.7: not found\r\n",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 127
}
gluster6 | FAILED! => {
    "changed": false,
    "module_stderr": "",
    "module_stdout": "/bin/sh: /usr/local/bin/python2.7: not found\r\n",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 127
}

We need to get Python installed on FreeBSD.

We will partially use Ansible for this and partially GNU Parallel.

vbhost % ansible -i hosts --list-hosts gluster \
           | sed 1d \
           | while read I; do ssh ${I} env ASSUME_ALWAYS_YES=yes pkg install python; done
pkg: Error fetching http://pkg.FreeBSD.org/FreeBSD:12:amd64/quarterly/Latest/pkg.txz: No address record
A pre-built version of pkg could not be found for your system.
Consider changing PACKAGESITE or installing it from ports: 'ports-mgmt/pkg'.
Bootstrapping pkg from pkg+http://pkg.FreeBSD.org/FreeBSD:12:amd64/quarterly, please wait...

… we forgot about setting up DNS on the FreeBSD machines, let's fix that.

It is as easy as executing the echo nameserver 1.1.1.1 > /etc/resolv.conf command on each FreeBSD machine.

Let's verify what input will be sent to GNU Parallel before executing it.

vbhost % ansible -i hosts --list-hosts gluster \
           | sed 1d \
           | while read I; do echo "ssh ${I} 'echo nameserver 1.1.1.1 > /etc/resolv.conf'"; done
ssh gluster1 'echo nameserver 1.1.1.1 > /etc/resolv.conf'
ssh gluster2 'echo nameserver 1.1.1.1 > /etc/resolv.conf'
ssh gluster3 'echo nameserver 1.1.1.1 > /etc/resolv.conf'
ssh gluster4 'echo nameserver 1.1.1.1 > /etc/resolv.conf'
ssh gluster5 'echo nameserver 1.1.1.1 > /etc/resolv.conf'
ssh gluster6 'echo nameserver 1.1.1.1 > /etc/resolv.conf'

Looks reasonable, let's engage GNU Parallel then.

vbhost % ansible -i hosts --list-hosts gluster \
           | sed 1d \
           | while read I; do echo "ssh ${I} 'echo nameserver 1.1.1.1 > /etc/resolv.conf'"; done | parallel

Computers / CPU cores / Max jobs to run
1:local / 2 / 2

Computer:jobs running/jobs completed/%of started jobs/Average seconds to complete
local:0/6/100%/1.0s

We will now verify that the DNS is configured properly on the FreeBSD machines.

vbhost % for I in $( jot 6 ); do echo -n "gluster${I} "; ssh gluster${I} 'cat /etc/resolv.conf'; done
gluster1 nameserver 1.1.1.1
gluster2 nameserver 1.1.1.1
gluster3 nameserver 1.1.1.1
gluster4 nameserver 1.1.1.1
gluster5 nameserver 1.1.1.1
gluster6 nameserver 1.1.1.1

Verification of the DNS resolution by using the host(1) command to test Internet connectivity.

vbhost % for I in $( jot 6 ); do echo; echo "gluster${I}"; ssh gluster${I} host freebsd.org; done

gluster1
freebsd.org has address 96.47.72.84
freebsd.org has IPv6 address 2610:1c1:1:606c::50:15
freebsd.org mail is handled by 10 mx1.freebsd.org.
freebsd.org mail is handled by 30 mx66.freebsd.org.

gluster2
freebsd.org has address 96.47.72.84
freebsd.org has IPv6 address 2610:1c1:1:606c::50:15
freebsd.org mail is handled by 30 mx66.freebsd.org.
freebsd.org mail is handled by 10 mx1.freebsd.org.

gluster3
freebsd.org has address 96.47.72.84
freebsd.org has IPv6 address 2610:1c1:1:606c::50:15
freebsd.org mail is handled by 30 mx66.freebsd.org.
freebsd.org mail is handled by 10 mx1.freebsd.org.

gluster4
freebsd.org has address 96.47.72.84
freebsd.org has IPv6 address 2610:1c1:1:606c::50:15
freebsd.org mail is handled by 30 mx66.freebsd.org.
freebsd.org mail is handled by 10 mx1.freebsd.org.

gluster5
freebsd.org has address 96.47.72.84
freebsd.org has IPv6 address 2610:1c1:1:606c::50:15
freebsd.org mail is handled by 10 mx1.freebsd.org.
freebsd.org mail is handled by 30 mx66.freebsd.org.

gluster6
freebsd.org has address 96.47.72.84
freebsd.org has IPv6 address 2610:1c1:1:606c::50:15
freebsd.org mail is handled by 10 mx1.freebsd.org.
freebsd.org mail is handled by 30 mx66.freebsd.org.

The DNS resolution works properly. Now we will switch from the default quarterly pkg(8) repository to the latest one, which has more frequent updates, as the name suggests. We will need to use the sed -i '' s/quarterly/latest/g /etc/pkg/FreeBSD.conf command on each FreeBSD machine.

Verification of what will be sent to GNU Parallel.

vbhost % ansible -i hosts --list-hosts gluster \
           | sed 1d \
           | while read I; do echo "ssh ${I} 'sed -i \"\" s/quarterly/latest/g /etc/pkg/FreeBSD.conf'"; done
ssh gluster1 'sed -i "" s/quarterly/latest/g /etc/pkg/FreeBSD.conf'
ssh gluster2 'sed -i "" s/quarterly/latest/g /etc/pkg/FreeBSD.conf'
ssh gluster3 'sed -i "" s/quarterly/latest/g /etc/pkg/FreeBSD.conf'
ssh gluster4 'sed -i "" s/quarterly/latest/g /etc/pkg/FreeBSD.conf'
ssh gluster5 'sed -i "" s/quarterly/latest/g /etc/pkg/FreeBSD.conf'
ssh gluster6 'sed -i "" s/quarterly/latest/g /etc/pkg/FreeBSD.conf'

Let's send the command to the FreeBSD machines then.

vbhost % ansible -i hosts --list-hosts gluster \
           | sed 1d \
           | while read I; do echo "ssh $I 'sed -i \"\" s/quarterly/latest/g /etc/pkg/FreeBSD.conf'"; done | parallel

Computers / CPU cores / Max jobs to run
1:local / 2 / 2

Computer:jobs running/jobs completed/%of started jobs/Average seconds to complete
local:0/6/100%/1.0s

As shown below the latest repository is configured in the /etc/pkg/FreeBSD.conf file on each FreeBSD machine.

vbhost % ssh gluster3 tail -7 /etc/pkg/FreeBSD.conf
FreeBSD: {
  url: "pkg+http://pkg.FreeBSD.org/${ABI}/latest",
  mirror_type: "srv",
  signature_type: "fingerprints",
  fingerprints: "/usr/share/keys/pkg",
  enabled: yes
}

We may now get back to Python.

vbhost % ansible -i hosts --list-hosts gluster \
           | sed 1d \
           | while read I; do echo ssh ${I} env ASSUME_ALWAYS_YES=yes pkg install python; done
ssh gluster1 env ASSUME_ALWAYS_YES=yes pkg install python
ssh gluster2 env ASSUME_ALWAYS_YES=yes pkg install python
ssh gluster3 env ASSUME_ALWAYS_YES=yes pkg install python
ssh gluster4 env ASSUME_ALWAYS_YES=yes pkg install python
ssh gluster5 env ASSUME_ALWAYS_YES=yes pkg install python
ssh gluster6 env ASSUME_ALWAYS_YES=yes pkg install python

… and execution on the FreeBSD machines with GNU Parallel.

vbhost % ansible -i hosts --list-hosts gluster \ 
           | sed 1d \
           | while read I; do echo ssh ${I} env ASSUME_ALWAYS_YES=yes pkg install python; done | parallel

Computers / CPU cores / Max jobs to run
1:local / 2 / 2

Computer:jobs running/jobs completed/%of started jobs/Average seconds to complete
local:0/6/100%/156.0s

The Python package and its dependencies are installed.

vbhost % ssh gluster3 pkg info
gettext-runtime-0.19.8.1_2     GNU gettext runtime libraries and programs
indexinfo-0.3.1                Utility to regenerate the GNU info page index
libffi-3.2.1_3                 Foreign Function Interface
pkg-1.10.5_5                   Package manager
python-2.7_3,2                 "meta-port" for the default version of Python interpreter
python2-2_3                    The "meta-port" for version 2 of the Python interpreter
python27-2.7.15                Interpreted object-oriented programming language
readline-7.0.5                 Library for editing command lines as they are typed

Now the Ansible ping module works as desired.

% ansible -i hosts -m ping gluster
gluster1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
gluster4 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
gluster5 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
gluster3 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
gluster2 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
gluster6 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

GlusterFS Volume Options

GlusterFS has a lot of options for setting up a volume. They are described in the GlusterFS Administration Guide in the Setting up GlusterFS Volumes part. Here they are:

Distributed - Distributed volumes distribute files across the bricks in the volume. You can use distributed volumes where the requirement is to scale storage and the redundancy is either not important or is provided by other hardware/software layers.

Replicated - Replicated volumes replicate files across bricks in the volume. You can use replicated volumes in environments where high-availability and high-reliability are critical.

Distributed Replicated - Distributed replicated volumes distribute files across replicated bricks in the volume. You can use distributed replicated volumes in environments where the requirement is to scale storage and high-reliability is critical. Distributed replicated volumes also offer improved read performance in most environments.

Dispersed - Dispersed volumes are based on erasure codes, providing space-efficient protection against disk or server failures. They store an encoded fragment of the original file on each brick in a way that only a subset of the fragments is needed to recover the original file. The number of bricks that can be missing without losing access to data is configured by the administrator at volume creation time.

Distributed Dispersed - Distributed dispersed volumes distribute files across dispersed subvolumes. This has the same advantages as distributed replicated volumes, but uses disperse to store the data on the bricks.

Striped [Deprecated] - Striped volumes stripe data across bricks in the volume. For best results, you should use striped volumes only in high concurrency environments accessing very large files.

Distributed Striped [Deprecated] - Distributed striped volumes stripe data across two or more nodes in the cluster. You should use distributed striped volumes where the requirement is to scale storage and, in high concurrency environments, access to very large files is critical.

Distributed Striped Replicated [Deprecated] - Distributed striped replicated volumes distribute striped data across replicated bricks in the cluster. For best results, you should use distributed striped replicated volumes in highly concurrent environments where parallel access of very large files and performance is critical. In this release, configuration of this volume type is supported only for Map Reduce workloads.

Striped Replicated [Deprecated] - Striped replicated volumes stripe data across replicated bricks in the cluster. For best results, you should use striped replicated volumes in highly concurrent environments where there is parallel access of very large files and performance is critical. In this release, configuration of this volume type is supported only for Map Reduce workloads.

From all of the still supported options above the Dispersed volume seems to be the best choice. Like Minio, Dispersed volumes are based on erasure codes.

As we have 6 servers we will use a 4 + 2 setup, which is a logical RAID6 across these 6 servers. This means that we will be able to lose 2 of them without a service outage. This also means that if we upload a 100 MB file to our volume we will use 150 MB of space across these 6 servers, with 25 MB on each node.

We can visualize this as following ASCII diagram.

+-----------+ +-----------+ +-----------+ +-----------+ +-----------+ +-----------+
|  gluster1 | |  gluster2 | |  gluster3 | |  gluster4 | |  gluster5 | |  gluster6 |
|           | |           | |           | |           | |           | |           |
|    brick1 | |    brick2 | |    brick3 | |    brick4 | |    brick5 | |    brick6 |
+-----+-----+ +-----+-----+ +-----+-----+ +-----+-----+ +-----+-----+ +-----+-----+
      |             |             |             |             |             |
    25|MB         25|MB         25|MB         25|MB         25|MB         25|MB
      |             |             |             |             |             |
      +-------------+-------------+------+------+-------------+-------------+
                                         |
                                      100|MB
                                         |
                                     +---+---+
                                     | file0 |
                                     +-------+

Deploy GlusterFS Cluster

We will use gluster-setup.yml as our Ansible playbook.

Let's create something for a start, for example a task to always install the latest Python package.

vbhost % cat gluster-setup.yml
---
- name: Install and Setup GlusterFS on FreeBSD
  hosts: gluster
  user: root
  tasks:

  - name: Install Latest Python Package
    pkgng:
      name: python
      state: latest

We will now execute it.

vbhost % ansible-playbook -i hosts gluster-setup.yml

PLAY [Install and Setup GlusterFS on FreeBSD] **********************************

TASK [Gathering Facts] *********************************************************
ok: [gluster3]
ok: [gluster5]
ok: [gluster1]
ok: [gluster4]
ok: [gluster2]
ok: [gluster6]

TASK [Install Latest Python Package] *******************************************
ok: [gluster4]
ok: [gluster2]
ok: [gluster5]
ok: [gluster3]
ok: [gluster1]
ok: [gluster6]

PLAY RECAP *********************************************************************
gluster1                   : ok=2    changed=0    unreachable=0    failed=0
gluster2                   : ok=2    changed=0    unreachable=0    failed=0
gluster3                   : ok=2    changed=0    unreachable=0    failed=0
gluster4                   : ok=2    changed=0    unreachable=0    failed=0
gluster5                   : ok=2    changed=0    unreachable=0    failed=0
gluster6                   : ok=2    changed=0    unreachable=0    failed=0

We had just installed Python on these machines, so no update was needed.

As we will be creating a cluster we need to add time synchronization between the nodes of the cluster. We will use the most obvious solution - the ntpd(8) daemon that is in the FreeBSD base system. These lines are added to our gluster-setup.yml playbook to achieve this goal.

  - name: Enable NTPD Service
    raw: sysrc ntpd_enable=YES

  - name: Start NTPD Service
    service:
      name: ntpd
      state: started
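
Optionally - as fresh virtual machines may have clocks that are far off - the ntpd_sync_on_start knob could be enabled in the same way, which allows ntpd(8) to make one large step at startup; a sketch of such an additional task:

  - name: Allow Large Clock Step on NTPD Start
    raw: sysrc ntpd_sync_on_start=YES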

After executing the playbook again with the ansible-playbook -i hosts gluster-setup.yml command we will see additional output as the one shown below.

TASK [Enable NTPD Service] ************************************************
changed: [gluster2]
changed: [gluster1]
changed: [gluster4]
changed: [gluster5]
changed: [gluster3]
changed: [gluster6]

TASK [Start NTPD Service] ******************************************************
changed: [gluster5]
changed: [gluster4]
changed: [gluster2]
changed: [gluster1]
changed: [gluster3]
changed: [gluster6]

Random verification of the NTP service.

vbhost % ssh gluster1 ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 0.freebsd.pool. .POOL.          16 p    -   64    0    0.000    0.000   0.000
 ntp.ifj.edu.pl  10.0.2.4         3 u    1   64    1  119.956  -345759  32.552
 news-archive.ic 229.30.220.210   2 u    -   64    1   60.533  -345760  21.104

Now we need to install GlusterFS on the FreeBSD machines - the glusterfs package.

We will add an appropriate section to the playbook.

  - name: Install Latest GlusterFS Package
    pkgng:
      state: latest
      name:
      - glusterfs
      - ncdu

You can add more than one package to the pkgng Ansible module - for example I have also added the ncdu package.

You can read more about the pkgng Ansible module by typing the ansible-doc pkgng command, or at least its short version with the -s argument.

vbhost % ansible-doc -s pkgng
- name: Package manager for FreeBSD >= 9.0
  pkgng:
      annotation:            # A comma-separated list of keyvalue-pairs of the form `[=]'. A `+' denotes adding
                               an annotation, a `-' denotes removing an annotation, and `:' denotes
                               modifying an annotation. If setting or modifying annotations, a value
                               must be provided.
      autoremove:            # Remove automatically installed packages which are no longer needed.
      cached:                # Use local package base instead of fetching an updated one.
      chroot:                # Pkg will chroot in the specified environment. Can not be used together with `rootdir' or `jail'
                               options.
      jail:                  # Pkg will execute in the given jail name or id. Can not be used together with `chroot' or `rootdir'
                               options.
      name:                  # (required) Name or list of names of packages to install/remove.
      pkgsite:               # For pkgng versions before 1.1.4, specify packagesite to use for downloading packages. If not
                               specified, use settings from `/usr/local/etc/pkg.conf'. For newer
                               pkgng versions, specify a the name of a repository configured in
                               `/usr/local/etc/pkg/repos'.
      rootdir:               # For pkgng versions 1.5 and later, pkg will install all packages within the specified root directory.
                               Can not be used together with `chroot' or `jail' options.
      state:                 # State of the package. Note: "latest" added in 2.7

You can read more about this particular module on the following Ansible page - https://docs.ansible.com/ansible/latest/modules/pkgng_module.html

We will now add the GlusterFS nodes to the /etc/hosts file and add the autoboot_delay=1 parameter to the /boot/loader.conf file, so our systems will boot 9 seconds faster, as 10 is the default delay setting.

Here is our gluster-setup.yml Ansible playbook so far.

vbhost % cat gluster-setup.yml
---
- name: Install and Setup GlusterFS on FreeBSD
  hosts: gluster
  user: root
  tasks:

  - name: Install Latest Python Package
    pkgng:
      name: python
      state: latest

  - name: Enable NTPD Service
    raw: sysrc ntpd_enable=YES

  - name: Start NTPD Service
    service:
      name: ntpd
      state: started

  - name: Install Latest GlusterFS Package
    pkgng:
      state: latest
      name:
      - glusterfs
      - ncdu

  - name: Add Nodes to /etc/hosts File
    blockinfile:
      path: /etc/hosts
      block: |
        10.0.10.11 gluster1
        10.0.10.12 gluster2
        10.0.10.13 gluster3
        10.0.10.14 gluster4
        10.0.10.15 gluster5
        10.0.10.16 gluster6

  - name: Add autoboot_delay to /boot/loader.conf File
    lineinfile:
      path: /boot/loader.conf
      line: autoboot_delay=1
      create: yes

Here is the result of the execution of this playbook.

vbhost % ansible-playbook -i hosts gluster-setup.yml

PLAY [Install and Setup GlusterFS on FreeBSD] **********************************

TASK [Gathering Facts] *********************************************************
ok: [gluster3]
ok: [gluster5]
ok: [gluster1]
ok: [gluster4]
ok: [gluster2]
ok: [gluster6]

TASK [Install Latest Python Package] *******************************************
ok: [gluster4]
ok: [gluster2]
ok: [gluster5]
ok: [gluster3]
ok: [gluster1]
ok: [gluster6]

TASK [Install Latest GlusterFS Package] ****************************************
ok: [gluster2]
ok: [gluster1]
ok: [gluster3]
ok: [gluster5]
ok: [gluster4]
ok: [gluster6]

TASK [Add Nodes to /etc/hosts File] ********************************************
changed: [gluster5]
changed: [gluster4]
changed: [gluster2]
changed: [gluster3]
changed: [gluster1]
changed: [gluster6]

TASK [Enable GlusterFS Service] ************************************************
changed: [gluster1]
changed: [gluster4]
changed: [gluster2]
changed: [gluster3]
changed: [gluster5]
changed: [gluster6]

TASK [Add autoboot_delay to /boot/loader.conf File] ****************************
changed: [gluster3]
changed: [gluster2]
changed: [gluster5]
changed: [gluster1]
changed: [gluster4]
changed: [gluster6]

PLAY RECAP *********************************************************************
gluster1                   : ok=6    changed=3    unreachable=0    failed=0
gluster2                   : ok=6    changed=3    unreachable=0    failed=0
gluster3                   : ok=6    changed=3    unreachable=0    failed=0
gluster4                   : ok=6    changed=3    unreachable=0    failed=0
gluster5                   : ok=6    changed=3    unreachable=0    failed=0
gluster6                   : ok=6    changed=3    unreachable=0    failed=0

Let's check that the FreeBSD machines can now ping each other by name.

vbhost % ssh gluster6 cat /etc/hosts
# LOOPBACK
127.0.0.1      localhost localhost.my.domain
::1            localhost localhost.my.domain

# BEGIN ANSIBLE MANAGED BLOCK
10.0.10.11 gluster1
10.0.10.12 gluster2
10.0.10.13 gluster3
10.0.10.14 gluster4
10.0.10.15 gluster5
10.0.10.16 gluster6
# END ANSIBLE MANAGED BLOCK

vbhost % ssh gluster1 ping -c 1 gluster3
PING gluster3 (10.0.10.13): 56 data bytes
64 bytes from 10.0.10.13: icmp_seq=0 ttl=64 time=1.924 ms

--- gluster3 ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 1.924/1.924/1.924/0.000 ms

… and our /boot/loader.conf file.

vbhost % ssh gluster4 cat /boot/loader.conf
autoboot_delay=1

Now we need to create directories for the GlusterFS data. Without a better idea we will use the /data directory with /data/volume1 as the directory for volume1, and the bricks will be put in /data/volume1/brickN directories. In this setup I will use just one brick per server, but in a production environment you would probably use one brick per physical disk.

Here is the playbook command we will use to create these directories on FreeBSD machines.

  - name: Create brick* Directories for volume1
    raw: mkdir -p /data/volume1/brick` hostname | grep -o -E '[0-9]+' `

After executing it with the ansible-playbook -i hosts gluster-setup.yml command the directories have been created.

vbhost % ssh gluster2 find /data -ls | column -t
2247168  8  drwxr-xr-x  3  root  wheel  512  Dec  28  17:48  /data
2247169  8  drwxr-xr-x  3  root  wheel  512  Dec  28  17:48  /data/volume1
2247170  8  drwxr-xr-x  2  root  wheel  512  Dec  28  17:48  /data/volume1/brick2
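
As a side note - now that Python is installed - the same directories could also be created with the Ansible file module instead of raw; a sketch that derives the brick number from the inventory hostname:

  - name: Create brick* Directories for volume1
    file:
      path: "/data/volume1/brick{{ inventory_hostname | regex_replace('[^0-9]', '') }}"
      state: directory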


We now need to add glusterd_enable=YES to the /etc/rc.conf file on the GlusterFS nodes and then start the GlusterFS service.

This is the snippet we will add to our playbook.

  - name: Enable GlusterFS Service
    raw: sysrc glusterd_enable=YES

  - name: Start GlusterFS Service
    service:
      name: glusterd
      state: started

Let's make a quick random verification.

vbhost % ssh gluster4 service glusterd status
glusterd is running as pid 2684.

Now we need to proceed to the last part of the GlusterFS setup - creating the volume.

We will do this from gluster1 - the 1st node of the GlusterFS cluster.

First we need to peer probe other nodes.

gluster1 # gluster peer probe gluster1
peer probe: success. Probe on localhost not needed
gluster1 # gluster peer probe gluster2
peer probe: success.
gluster1 # gluster peer probe gluster3
peer probe: success.
gluster1 # gluster peer probe gluster4
peer probe: success.
gluster1 # gluster peer probe gluster5
peer probe: success.
gluster1 # gluster peer probe gluster6
peer probe: success.

Then we can create the volume. We will need to use the force option because for our example setup we use directories on the root partition.

gluster1 # gluster volume create volume1 \
             disperse-data 4 \
             redundancy 2 \
             transport tcp \
             gluster1:/data/volume1/brick1 \
             gluster2:/data/volume1/brick2 \
             gluster3:/data/volume1/brick3 \
             gluster4:/data/volume1/brick4 \
             gluster5:/data/volume1/brick5 \
             gluster6:/data/volume1/brick6 \
             force
volume create: volume1: success: please start the volume to access data

We can now start the volume1 GlusterFS volume.

gluster1 # gluster volume start volume1
volume start: volume1: success

gluster1 # gluster volume status volume1
Status of volume: volume1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster1:/data/volume1/brick1         N/A       N/A        N       N/A
Brick gluster2:/data/volume1/brick2         N/A       N/A        N       N/A
Brick gluster3:/data/volume1/brick3         N/A       N/A        N       N/A
Brick gluster4:/data/volume1/brick4         N/A       N/A        N       N/A
Brick gluster5:/data/volume1/brick5         N/A       N/A        N       N/A
Brick gluster6:/data/volume1/brick6         N/A       N/A        N       N/A
Self-heal Daemon on localhost               N/A       N/A        N       644
Self-heal Daemon on gluster6                N/A       N/A        N       643
Self-heal Daemon on gluster5                N/A       N/A        N       647
Self-heal Daemon on gluster2                N/A       N/A        N       645
Self-heal Daemon on gluster3                N/A       N/A        N       645
Self-heal Daemon on gluster4                N/A       N/A        N       645

Task Status of Volume volume1
------------------------------------------------------------------------------
There are no active volume tasks

gluster1 # gluster volume info volume1

Volume Name: volume1
Type: Disperse
Volume ID: 68cf9607-16bc-4550-9b6b-16a5c7656f51
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: gluster1:/data/volume1/brick1
Brick2: gluster2:/data/volume1/brick2
Brick3: gluster3:/data/volume1/brick3
Brick4: gluster4:/data/volume1/brick4
Brick5: gluster5:/data/volume1/brick5
Brick6: gluster6:/data/volume1/brick6
Options Reconfigured:
nfs.disable: on
transport.address-family: inet

Here are the contents of a currently unused/empty brick.

gluster1 # find /data/volume1/brick1
/data/volume1/brick1
/data/volume1/brick1/.glusterfs
/data/volume1/brick1/.glusterfs/indices
/data/volume1/brick1/.glusterfs/indices/xattrop
/data/volume1/brick1/.glusterfs/indices/entry-changes
/data/volume1/brick1/.glusterfs/quarantine
/data/volume1/brick1/.glusterfs/quarantine/stub-00000000-0000-0000-0000-000000000008
/data/volume1/brick1/.glusterfs/changelogs
/data/volume1/brick1/.glusterfs/changelogs/htime
/data/volume1/brick1/.glusterfs/changelogs/csnap
/data/volume1/brick1/.glusterfs/brick1.db
/data/volume1/brick1/.glusterfs/brick1.db-wal
/data/volume1/brick1/.glusterfs/brick1.db-shm
/data/volume1/brick1/.glusterfs/00
/data/volume1/brick1/.glusterfs/00/00
/data/volume1/brick1/.glusterfs/00/00/00000000-0000-0000-0000-000000000001
/data/volume1/brick1/.glusterfs/landfill
/data/volume1/brick1/.glusterfs/unlink
/data/volume1/brick1/.glusterfs/health_check

The 6-node GlusterFS cluster is now complete and volume1 is available for use.

Alternative

The GlusterFS documentation's Quick Start Guide also suggests using Ansible to deploy and manage GlusterFS with the gluster-ansible repository or gluster-ansible-cluster, but they have the requirements below.

  • Ansible version 2.5 or above.
  • GlusterFS version 3.2 or above.

As GlusterFS on FreeBSD is at version 3.11.1, I did not use them.

FreeBSD Client

We will now use another VirtualBox machine - also based on the same FreeBSD 12.0-RELEASE image - to create the FreeBSD Client machine that will mount our volume1 volume.

We will need to install the glusterfs package with the pkg(8) command. Then we will use the mount_glusterfs command to mount the volume. Keep in mind that in order to mount a GlusterFS volume the FUSE (fuse.ko) kernel module is needed.

client # pkg install glusterfs

client # kldload fuse

client # mount_glusterfs 10.0.10.11:volume1 /mnt

client # echo $?
0

client # mount
/dev/gpt/rootfs on / (ufs, local, soft-updates)
devfs on /dev (devfs, local, multilabel)
/dev/fuse on /mnt (fusefs, local, synchronous)

client # ls /mnt
ls: /mnt: Socket is not connected

It is mounted but does not work. The solution to this problem is to add the appropriate /etc/hosts entries for the GlusterFS nodes.

client # cat /etc/hosts
::1                     localhost localhost.my.domain
127.0.0.1               localhost localhost.my.domain

10.0.10.11 gluster1
10.0.10.12 gluster2
10.0.10.13 gluster3
10.0.10.14 gluster4
10.0.10.15 gluster5
10.0.10.16 gluster6

Let's mount it again, now with the needed /etc/hosts entries.

client # umount /mnt

client # mount_glusterfs gluster1:volume1 /mnt

client # ls /mnt
client #

We now have our GlusterFS volume properly mounted and working on the FreeBSD Client machine.
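
To have the fuse.ko kernel module loaded automatically at boot - so the mount can be recreated after a restart - it can be appended to the kld_list variable in the /etc/rc.conf file, for example with sysrc(8):

client # sysrc kld_list+="fuse"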

Let's write some file there with dd(8) to see how it works.

client # dd if=/dev/zero of=FILE bs=1m count=100 status=progress
  73400320 bytes (73 MB, 70 MiB) transferred 1.016s, 72 MB/s
100+0 records in
100+0 records out
104857600 bytes transferred in 1.565618 secs (66975227 bytes/sec)

Let's see how it looks in the brick directory.

gluster1 # ls -lh /data/volume1/brick1
total 25640
drw-------  10 root  wheel   512B Jan  3 18:31 .glusterfs
-rw-r--r--   2 root  wheel    25M Jan  3 18:31 FILE

gluster1 # find /data
/data/
/data/volume1
/data/volume1/brick1
/data/volume1/brick1/.glusterfs
/data/volume1/brick1/.glusterfs/indices
/data/volume1/brick1/.glusterfs/indices/xattrop
/data/volume1/brick1/.glusterfs/indices/xattrop/xattrop-aed814f1-0eb0-46a1-b569-aeddf5048e06
/data/volume1/brick1/.glusterfs/indices/entry-changes
/data/volume1/brick1/.glusterfs/quarantine
/data/volume1/brick1/.glusterfs/quarantine/stub-00000000-0000-0000-0000-000000000008
/data/volume1/brick1/.glusterfs/changelogs
/data/volume1/brick1/.glusterfs/changelogs/htime
/data/volume1/brick1/.glusterfs/changelogs/csnap
/data/volume1/brick1/.glusterfs/brick1.db
/data/volume1/brick1/.glusterfs/brick1.db-wal
/data/volume1/brick1/.glusterfs/brick1.db-shm
/data/volume1/brick1/.glusterfs/00
/data/volume1/brick1/.glusterfs/00/00
/data/volume1/brick1/.glusterfs/00/00/00000000-0000-0000-0000-000000000001
/data/volume1/brick1/.glusterfs/landfill
/data/volume1/brick1/.glusterfs/unlink
/data/volume1/brick1/.glusterfs/health_check
/data/volume1/brick1/.glusterfs/ac
/data/volume1/brick1/.glusterfs/ac/b4
/data/volume1/brick1/.glusterfs/11
/data/volume1/brick1/.glusterfs/11/50
/data/volume1/brick1/.glusterfs/11/50/115043ca-420f-48b5-af05-c9552db2e585
/data/volume1/brick1/FILE

Linux Client

I will also show how to mount a GlusterFS volume on the Red Hat clone CentOS in its latest 7.6 incarnation. It will require the glusterfs-fuse package installation.

[root@localhost ~]# yum install glusterfs-fuse


[root@localhost ~]# rpm -q --filesbypkg glusterfs-fuse | grep /sbin/mount.glusterfs
glusterfs-fuse            /sbin/mount.glusterfs

[root@localhost ~]# mount.glusterfs 10.0.10.11:volume1 /mnt
Mount failed. Please check the log file for more details.

Similarly to the FreeBSD Client, the /etc/hosts entries are needed.

[root@localhost ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

10.0.10.11 gluster1
10.0.10.12 gluster2
10.0.10.13 gluster3
10.0.10.14 gluster4
10.0.10.15 gluster5
10.0.10.16 gluster6

[root@localhost ~]# mount.glusterfs 10.0.10.11:volume1 /mnt

[root@localhost ~]# ls /mnt
FILE

[root@localhost ~]# mount
10.0.10.11:volume1 on /mnt type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)

With appropriate /etc/hosts entries it works as desired. We see the FILE file generated from the FreeBSD Client machine.

GlusterFS Cluster Redundancy

After messing with the volume and creating and deleting various files I also tested its redundancy. In theory this RAID6 equivalent protection should protect us from the loss of two of the six servers. After shutting down two of the VirtualBox machines the volume is still available and ready to use, as sketched below.
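
A sketch of such a test - the machines are stopped with the VBoxManage controlvm command; gluster5 and gluster6 are an arbitrary choice here:

vbhost % VBoxManage controlvm "FreeBSD GlusterFS 5" poweroff
vbhost % VBoxManage controlvm "FreeBSD GlusterFS 6" poweroff

client # ls /mnt
FILE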

Closing Thoughts

It is a pity that FreeBSD does not provide a more modern GlusterFS package, as currently only version 3.11.1 is available.

EOF

Highly Available DHCP Server on FreeBSD

Today I would like to share a highly available DHCP server setup on a FreeBSD system, but it should be similarly simple on other UNIX and Unix-like systems. I will use the most obvious choice here - the Internet Systems Consortium implementation, the ISC DHCP server - available in the FreeBSD Ports and packages as well.

ISC

For some time ISC has been developing a new DHCP server - Kea - with which they intend to eventually replace ISC DHCP in most server implementations. They also recommend that new implementers consider using Kea instead of ISC DHCP and implement ISC DHCP only if Kea does not meet their needs. Kea currently does not include either a client or a relay, for example. Maybe I will make an UPDATE to this post or a separate article some time.

Also, Kea got its high availability mode just a month ago, so if I had been writing this article a little earlier such a setup would not have been possible with Kea. It also shows how young the Kea implementation is, thus I will stick to the ISC DHCP server for now and 'watch' Kea development for the future.

Architecture

Below is the POOR MAN'S ASCII ARCHITECT diagram showing our ISC DHCP setup.

  +-------------+              +-------------+
  | {primary}   |              | {secondary} |
  | DHCPs1      | ==== HA ==== | DHCPs2      |
  | 10.0.10.251 |              | 10.0.10.252 |
  +-------------+              +-------------+
                 \            /
  +------------------------------------------+
  | ADDRESS POOL  10.0.10.x/24  ADDRESS POOL |
  +------------------------------------------+
              \                  /
               +----------------+
               | {DHCP CLIENTS} |
               +----------------+

The setup of each DHCP server node is very simple. It is FreeBSD 11.2-RELEASE installed on a 4 GB GPT partition using UFS for the / filesystem, and only 666 MB are used, as shown below.

root@DHCPs1:/ # uname -v
FreeBSD 11.2-RELEASE #0 r335510: Fri Jun 22 04:32:14 UTC 2018     root@releng2.nyi.freebsd.org:/usr/obj/usr/src/sys/GENERIC 

root@DHCPs1:/ # gpart show
=>     40  8388528  ada0  GPT  (4.0G)
       40     1024     1  freebsd-boot  (512K)
     1064  8386560     2  freebsd-ufs  (4.0G)
  8387624      944        - free -  (472K)

root@DHCPs1:/ # du -smc * | sort -n
0       sys
1       COPYRIGHT
1       dev
1       entropy
1       libexec
1       media
1       mnt
1       net
1       proc
1       root
1       tmp
2       bin
4       etc
7       sbin
8       var
10      rescue
12      lib
128     boot
499     usr
666     total

128 MB of RAM is enough for a small number of clients. There is still 32 MB of free memory, along with 32 MB of Inactive and Buffered memory that can be swapped out. Not to mention that each getty process takes about 2 MB of RAM, and instead of 8 of them you only need 1 (a sketch of disabling the spare ones follows the top(1) output below). In other words you would be able to run it even with as little as 64 MB of RAM.

root@DHCPs1:~ # top -b -o res
last pid: 15205;  load averages:  0.13,  0.25,  0.29  up 0+07:39:11    20:03:48
16 processes:  2 running, 14 sleeping

Mem: 1688K Active, 30M Inact, 26M Wired, 3800K Buf, 32M Free
Swap:


  PID USERNAME    THR PRI NICE   SIZE    RES STATE    TIME    WCPU COMMAND
38897 dhcpd         1  20    0 16424K 10724K select   0:00   0.00% dhcpd
30199 root          1  20    0 13160K  8036K RUN      0:00   0.00% sshd
15106 root          1  28    0 12848K  7136K select   0:00   0.00% sshd
53100 root          1  20    0  9180K  5040K select   0:02   0.00% devd
31079 root          1  20    0  7412K  3640K pause    0:00   0.00% csh
15205 root          1  20    0  7916K  3060K RUN      0:00   0.00% top
15960 root          1  20    0  6464K  2480K nanslp   0:00   0.00% cron
69084 root          1  20    0  6412K  2364K select   0:01   0.00% syslogd
28412 root          1  52    0  6408K  2124K ttyin    0:00   0.00% getty
28188 root          1  52    0  6408K  2124K ttyin    0:00   0.00% getty
28504 root          1  52    0  6408K  2124K ttyin    0:00   0.00% getty
28972 root          1  52    0  6408K  2124K ttyin    0:00   0.00% getty
29736 root          1  52    0  6408K  2124K ttyin    0:00   0.00% getty
29080 root          1  52    0  6408K  2124K ttyin    0:00   0.00% getty
30106 root          1  52    0  6408K  2124K ttyin    0:00   0.00% getty
29392 root          1  52    0  6408K  2124K ttyin    0:00   0.00% getty



The /etc/rc.conf file for DHCP nodes DHCPs1 and DHCPs2 is the same (besides hostname and address).

root@DHCPs1:/ # cat /etc/rc.conf
hostname=DHCPs1
ifconfig_em0="inet 10.0.10.251/24 up"
sshd_enable=YES
sendmail_enable=NONE
clear_tmp_enable=YES
syslogd_flags="-ss"
dumpdev=NO

Modifications to the /etc/sysctl.conf and /boot/loader.conf files are not needed.

Now you will have to install the ISC DHCP server. As the current version is 4.4.x the package is named accordingly - isc-dhcp44-server - let's add it using the pkg(8) command.

root@DHCPs1:/ # pkg update -f -y
The package management tool is not yet installed on your system.
Bootstrapping pkg from pkg+http://pkg.FreeBSD.org/FreeBSD:11:amd64//quarterly, please wait...
Verifying signature with trusted certificate pkg.freebsd.org.2013102301... done
[nextcloud] Installing pkg-1.10.5...
[nextcloud] Extracting pkg-1.10.5: 100%
Updating FreeBSD repository catalogue...
pkg: Repository FreeBSD load error: access repo file(/var/db/pkg/repo-FreeBSD.sqlite) failed: No such file or directory
[nextcloud] Fetching meta.txz: 100%    944 B   0.9kB/s    00:01
[nextcloud] Fetching packagesite.txz: 100%    6 MiB 530.8kB/s    00:12
Processing entries: 100%
FreeBSD repository update completed. 31134 packages processed.
All repositories are up to date.
root@DHCPs1:/ # echo $?
0
root@DHCPs1:/ #

Now let's install the isc-dhcp44-server package.

root@DHCPs1:/ # pkg install isc-dhcp44-server
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
Checking integrity... done (0 conflicting)
The following 1 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
        isc-dhcp44-server: 4.4.1_3 [FreeBSD]

Number of packages to be installed: 1

The process will require 6 MiB more space.

Proceed with this action? [y/N]: y
[1/1] Installing isc-dhcp44-server-4.4.1_3...
===> Creating groups.
Creating group 'dhcpd' with gid '136'.
===> Creating users
Creating user 'dhcpd' with uid '136'.
[1/1] Extracting isc-dhcp44-server-4.4.1_3: 100%
Message from isc-dhcp44-server-4.4.1_3:

****  To setup dhcpd, please edit /usr/local/etc/dhcpd.conf.

****  This port installs the dhcp daemon, but doesn't invoke dhcpd by default.
      If you want to invoke dhcpd at startup, add these lines to /etc/rc.conf:

            dhcpd_enable="YES"                          # dhcpd enabled?
            dhcpd_flags="-q"                            # command option(s)
            dhcpd_conf="/usr/local/etc/dhcpd.conf"      # configuration file
            dhcpd_ifaces=""                             # ethernet interface(s)
            dhcpd_withumask="022"                       # file creation mask

****  If compiled with paranoia support (the default), the following rc.conf
      options are also supported:

            dhcpd_chuser_enable="YES"           # runs w/o privileges?
            dhcpd_withuser="dhcpd"              # user name to run as
            dhcpd_withgroup="dhcpd"             # group name to run as
            dhcpd_chroot_enable="YES"           # runs chrooted?
            dhcpd_devfs_enable="YES"            # use devfs if available?
            dhcpd_rootdir="/var/db/dhcpd"       # directory to run in
            dhcpd_includedir=""       # directory with config-
                                                  files to include

****  WARNING: never edit the chrooted or jailed dhcpd.conf file but
      /usr/local/etc/dhcpd.conf instead which is always copied where
      needed upon startup.

Now update the pkg(8) repository data and install the isc-dhcp44-server package on the DHCPs2 node.

The configuration uses a single network segment 10.0.10.0/24 for the clients, with the range of 10-250 values in the last octet. The split 128 parameter will split the load equally between the DHCP server nodes. As this is just an example, we will use the 1.1.1.1 and 9.9.9.9 DNS servers and the 'domain.com' domain. For the record, the split 128 parameter is set only on the primary node - DHCPs1 in our case. As the man dhcpd.conf page suggests, we will "use the same master configuration file for both servers, and have a separate file that contains the peer declaration and includes the master file." as "This will help you to avoid configuration mismatches."

root@DHCPs1:/ # cat /usr/local/etc/dhcpd.conf
# CORE
failover peer "ha-dhcp" {
  primary;
  address 10.0.10.251;
  port 678;
  peer address 10.0.10.252;
  peer port 678;
  max-response-delay 60;
  max-unacked-updates 10;
  mclt 3600;
  split 128;
  load balance max seconds 3;
}

include "/usr/local/etc/dhcpd.conf.SHARED";
root@DHCPs1:/ # cat /usr/local/etc/dhcpd.conf.SHARED
# CLIENTS
subnet 10.0.10.0 netmask 255.255.255.0 {
  default-lease-time         604800;
  max-lease-time             604800;
  option routers             10.0.10.254;
  option broadcast-address   10.0.10.255;
  option subnet-mask         255.255.255.0;
  option domain-search       "domain.com";
  option domain-name-servers 1.1.1.1,9.9.9.9;

  pool {
    failover peer "ha-dhcp";
    range 10.0.10.10 10.0.10.250;
  }
}

… and the secondary node.

root@DHCPs2:~ # cat /usr/local/etc/dhcpd.conf
# CORE
failover peer "ha-dhcp" {
  secondary;
  address 10.0.10.252;
  port 678;
  peer address 10.0.10.251;
  peer port 678;
  max-response-delay 60;
  max-unacked-updates 10;
  mclt 3600;
  load balance max seconds 3;
}

include "/usr/local/etc/dhcpd.conf.SHARED";
root@DHCPs2:/ # cat /usr/local/etc/dhcpd.conf.SHARED
# CLIENTS
subnet 10.0.10.0 netmask 255.255.255.0 {
  default-lease-time         604800;
  max-lease-time             604800;
  option routers             10.0.10.254;
  option broadcast-address   10.0.10.255;
  option subnet-mask         255.255.255.0;
  option domain-search       "domain.com";
  option domain-name-servers 1.1.1.1,9.9.9.9;

  pool {
    failover peer "ha-dhcp";
    range 10.0.10.10 10.0.10.250;
  }
}

The /usr/local/etc/dhcpd.conf.SHARED file is identical on both nodes.

Now let's start the DHCP server on both nodes.

root@DHCPs1:~ # sysrc dhcpd_enable=YES
dhcpd_enable:  -> YES
root@DHCPs1:~ # service isc-dhcpd start
Starting dhcpd.
Internet Systems Consortium DHCP Server 4.4.1
Copyright 2004-2018 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/
Config file: /usr/local/etc/dhcpd.conf
Database file: /var/db/dhcpd/dhcpd.leases
PID file: /var/run/dhcpd/dhcpd.pid
Wrote 122 leases to leases file.
Listening on BPF/em0/08:00:27:3c:ab:c8/10.0.10.0/24
Sending on   BPF/em0/08:00:27:3c:ab:c8/10.0.10.0/24
Sending on   Socket/fallback/fallback-net
failover peer ha-dhcp: I move from normal to startup

… and the same on the secondary node.

root@DHCPs2:~ # sysrc dhcpd_enable=YES
dhcpd_enable:  -> YES
root@DHCPs2:~ # service isc-dhcpd onestart
Starting dhcpd.
Internet Systems Consortium DHCP Server 4.4.1
Copyright 2004-2018 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/
Config file: /usr/local/etc/dhcpd.conf
Database file: /var/db/dhcpd/dhcpd.leases
PID file: /var/run/dhcpd/dhcpd.pid
Wrote 122 leases to leases file.
Listening on BPF/em0/08:00:27:de:9b:3d/10.0.10.0/24
Sending on   BPF/em0/08:00:27:de:9b:3d/10.0.10.0/24
Sending on   Socket/fallback/fallback-net
failover peer ha-dhcp: I move from communications-interrupted to startup

Now, as both nodes of the highly available DHCP server are started, let's try to get a DHCP lease on the DHCP client - DHCPc in our example.

root@DHCPc:~ # dhclient em0
DHCPREQUEST on em0 to 255.255.255.255 port 67
DHCPREQUEST on em0 to 255.255.255.255 port 67
DHCPACK from 10.0.10.251
bound to 10.0.10.131 -- renewal in 302119 seconds.
root@DHCPc:~ # ifconfig em0
em0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=9b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM>
        ether 08:00:27:d9:45:96
        hwaddr 08:00:27:d9:45:96
        inet 10.0.10.131 netmask 0xffffff00 broadcast 10.0.10.255
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
        media: Ethernet autoselect (1000baseT <full-duplex>)
        status: active

We can see that the DHCP client - DHCPc - got the 10.0.10.131 address.

We can of course set a permanent address for it with the host option in the /usr/local/etc/dhcpd.conf.SHARED config file, as shown below.

The needed 'addon' is shown below.

  group
  {
    host DHCPc {
      hardware ethernet 08:00:27:d9:45:96;
      fixed-address 10.0.10.9;
    }
  }

It needs to be added on both nodes in the /usr/local/etc/dhcpd.conf.SHARED config file; here is how the new shared config file looks.

root@DHCPs1:~ # cat /usr/local/etc/dhcpd.conf.SHARED
# CLIENTS
subnet 10.0.10.0 netmask 255.255.255.0 {
  default-lease-time         604800;
  max-lease-time             604800;
  option routers             10.0.10.254;
  option broadcast-address   10.0.10.255;
  option subnet-mask         255.255.255.0;
  option domain-search       "domain.com";
  option domain-name-servers 1.1.1.1,9.9.9.9;

  group
  {
    host DHCPc {
      hardware ethernet 08:00:27:d9:45:96;
      fixed-address 10.0.10.9;
    }
  }

  pool {
    failover peer "ha-dhcp";
    range 10.0.10.10 10.0.10.250;
  }
}

Now copy the /usr/local/etc/dhcpd.conf.SHARED file to the second node.

Let's try again to get an address on the same DHCP client.

root@DHCPc:~ # pkill dhclient
root@DHCPc:~ # service netif restart
root@DHCPc:~ # dhclient em0
DHCPREQUEST on em0 to 255.255.255.255 port 67
DHCPREQUEST on em0 to 255.255.255.255 port 67
DHCPACK from 10.0.10.252
bound to 10.0.10.131 -- renewal in 1665 seconds.
DHCPREQUEST on em0 to 255.255.255.255 port 67
DHCPREQUEST on em0 to 255.255.255.255 port 67
DHCPNAK from 10.0.10.252
DHCPDISCOVER on em0 to 255.255.255.255 port 67 interval 3
DHCPOFFER from 10.0.10.251
DHCPOFFER from 10.0.10.252
DHCPOFFER already seen.
DHCPREQUEST on em0 to 255.255.255.255 port 67
DHCPACK from 10.0.10.252
bound to 10.0.10.9 -- renewal in 302400 seconds.
root@DHCPc:~ # ifconfig em0
em0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=9b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM>
        ether 08:00:27:d9:45:96
        hwaddr 08:00:27:d9:45:96
        inet 10.0.10.9 netmask 0xffffff00 broadcast 10.0.10.255
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
        media: Ethernet autoselect (1000baseT <full-duplex>)
        status: active

Now we got the permanent 10.0.10.9 address.

You can now experiment with these values in the /etc/rc.conf file:

  • dhcpd_flags
  • dhcpd_ifaces
  • dhcpd_withumask
  • dhcpd_chuser_enable
  • dhcpd_withuser
  • dhcpd_withgroup
  • dhcpd_chroot_enable
  • dhcpd_devfs_enable
  • dhcpd_rootdir
  • dhcpd_includedir

… along with all the other possible options from the man dhcpd.conf page 🙂
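
For example, to run dhcpd(8) without privileges and chrooted - using the options described in the package message above - something like this should do (a sketch):

root@DHCPs1:~ # sysrc dhcpd_chuser_enable=YES dhcpd_chroot_enable=YES dhcpd_devfs_enable=YES
root@DHCPs1:~ # service isc-dhcpd restart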

EOF

Bareos Backup Server on FreeBSD

Ever heard about Bareos? You have probably heard about Bacula. Read about the difference here - Why Bareos forked from Bacula?

bareos-logo

If you are interested in a more enterprise backup solution then check the IBM TSM (Spectrum Protect) on Veritas Cluster Server article.

Bareos (Backup Archiving Recovery Open Sourced) is a network based open source backup solution. It is a 100% open source fork of the backup project from the bacula.org site. The fork has been in development since late 2010 and it has a lot of new features. The source is published on GitHub and licensed under the AGPLv3 license. Bareos supports the 'Always Incremental' backup scheme, which is interesting especially for users with big data. The time and network capacity consuming full backups only have to be taken once. Bareos comes with a WebUI for administration tasks and a restore file browser. Bareos can back up data to disk and to tape drives as well as tape libraries. It supports compression and encryption, both hardware-based (like on LTO tape drives) and software-based. You can also get professional services and support from Bareos, as well as the Bareos subscription service that provides you access to special quality assured installation packages.

I started my sysadmin job with a backup system as one of my new responsibilities, so it will be like going back to the roots. As I look at the 'backup' market it is more and more popular - especially in cloud oriented environments - to implement various levels of protection like GOLD, SILVER and BRONZE for example. They of course have different retention times, numbers of backups kept, and different RTO and RPO. Below is an example implementation of BRONZE level backups in Bareos. I used 3 groups A, B and C with the FULL backup starting on DAY 0 (A group), DAY 1 (B group) and DAY 2 (C group).

bareos-sched-levels-256.png

This way you still have FULL backups quite often and with 3 groups you can balance the network load. For the days on which we will not be doing FULL backups we will be doing DIFFERENTIAL backups. People often confuse them with INCREMENTAL backups. The difference is that DIFFERENTIAL backups are always made against the FULL backup, so it is always ‘one level of combining’. INCREMENTAL ones are made against the last backup of any TYPE, so it is possible to have 100+ levels of combining against 99 earlier INCREMENTAL backups and the 1 FULL backup. That is why I prefer DIFFERENTIAL ones here – faster recovery. That is what backups are generally all about – recovery – and some people/companies tend to forget that.
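
A quick example to make it concrete: with a FULL backup on Saturday, restoring Thursday's state from DIFFERENTIAL backups needs only 2 sets – the Saturday FULL plus the Thursday DIFFERENTIAL – while with INCREMENTAL backups it needs 5 sets – the Saturday FULL plus the Monday, Tuesday, Wednesday and Thursday INCREMENTALs – and losing any single one of them breaks the restore chain.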

The implementation of BRONZE in these three groups is not perfect, but it ‘does the job’. I also made a ‘simulation’ of how these groups will overlap at the end/beginning of the month – here is the result.

bareos-sched-cross-256.png

Not bad for my taste.

Today I will show you how to install and configure a Bareos Server based on the FreeBSD operating system. It will be the most simplified setup, with all services on a single machine:

  • bareos-dir
  • bareos-sd
  • bareos-webui
  • bareos-fd

I also assume that, in order to provide storage space for the backup data itself, You would mount resources from external NFS shares.

To get in touch with Bareos terminology and technology check their great Manual in HTML or PDF version, depending on which format You prefer for reading documentation. Also their FAQ provides a lot of needed answers.

Also this diagram may be useful for You to get some grip into the Bareos world.

bareos-overview-small

System

As every system needs to have its name, we will use the Latin word closest to backup – replica – for our FreeBSD system hostname. The install is generally the same as in the FreeBSD Desktop – Part 2 – Install article. Here is our installed FreeBSD system with a login prompt.

freebsd-nakatomi.jpg

Sorry, couldn’t resist 🙂

Here are the 3 most important configuration files after some time in vi(1) with them.

root@replica:~ # cat /etc/rc.conf
# NETWORK
  hostname=replica.backup.org
  ifconfig_em0="inet 10.0.10.30/24 up"
  defaultrouter="10.0.10.1"

# DAEMONS
  zfs_enable=YES
  sshd_enable=YES
  nfs_client_enable=YES
  syslogd_flags="-ss"
  sendmail_enable=NONE

# OTHER
  clear_tmp_enable=YES
  dumpdev=NO

# BAREOS
# postgresql_enable=YES
# postgresql_class=pgsql
# bareos_dir_enable=YES
# bareos_sd_enable=YES
# bareos_fd_enable=YES
# php_fpm_enable=YES
# nginx_enable=YES

As You can see all ‘core’ services for Bareos are currently disabled on purpose. We will enable them later.

Parameters and modules to be set at boot.

root@replica:~ # cat /boot/loader.conf
# BOOT OPTIONS
  autoboot_delay=2
  kern.geom.label.disk_ident.enable=0
  kern.geom.label.gptid.enable=0

# MODULES
  zfs_load=YES

# IPC
  kern.ipc.shmseg=1024
  kern.ipc.shmmni=1024

Parameters to be set at runtime.

root@replica:~ # cat /etc/sysctl.conf
# SECURITY
  security.bsd.see_other_uids=0
  security.bsd.see_other_gids=0
  security.bsd.unprivileged_read_msgbuf=0
  security.bsd.unprivileged_proc_debug=0
  security.bsd.stack_guard_page=1
  kern.randompid=9100

# ZFS
  vfs.zfs.min_auto_ashift=12

# DISABLE ANNOYING THINGS
  kern.coredump=0
  hw.syscons.bell=0
  kern.vt.enable_bell=0

# IPC
  kern.ipc.shmall=524288
  kern.ipc.maxsockbuf=5242880
  kern.ipc.shm_allow_removed=1

After install we will disable the /zroot mounting.

root@replica:/ # zfs set mountpoint=none zroot

As we have sendmail(8) disabled we will need to take care of its queue.

root@replica:~ # cat > /etc/cron.d/sendmail-clean-clientmqueue << __EOF
# CLEAN SENDMAIL
0 * * * * root /bin/rm -r -f /var/spool/clientmqueue/*
__EOF

Assuming the NFS servers are configured in the /etc/hosts file, the ‘complete’ /etc/hosts file would look like this.

root@replica:~ # grep '^[^#]' /etc/hosts
::1        localhost localhost.my.domain
127.0.0.1  localhost localhost.my.domain
10.0.10.30 replica.backup.org replica
10.0.10.50 nfs-pri.backup.org nfs-pri
10.0.20.50 nfs-sec.backup.org nfs-sec

Let's verify outside world connectivity – needed for adding the Bareos packages.

root@replica:~ # nc -v bareos.org 443
Connection to bareos.org 443 port [tcp/https] succeeded!
^C
root@replica:~ #

Packages

As we want the latest packages we will modify the pkg(8) repository file /etc/pkg/FreeBSD.conf.

root@replica:~ # grep '^[^#]' /etc/pkg/FreeBSD.conf
FreeBSD: {
  url: "pkg+http://pkg.FreeBSD.org/${ABI}/quarterly",
  mirror_type: "srv",
  signature_type: "fingerprints",
  fingerprints: "/usr/share/keys/pkg",
  enabled: yes
}

root@replica:~ # sed -i '' s/quarterly/latest/g /etc/pkg/FreeBSD.conf

root@replica:~ # grep '^[^#]' /etc/pkg/FreeBSD.conf
FreeBSD: {
  url: "pkg+http://pkg.FreeBSD.org/${ABI}/latest",
  mirror_type: "srv",
  signature_type: "fingerprints",
  fingerprints: "/usr/share/keys/pkg",
  enabled: yes
}

We will use Bareos packages from pkg(8) as they are available, no need to waste time and power on compilation.

root@replica:~ # pkg search bareos
The package management tool is not yet installed on your system.
Do you want to fetch and install it now? [y/N]: y
(...)
bareos-bat-16.2.7              Backup archiving recovery open sourced (GUI)
bareos-client-16.2.7           Backup archiving recovery open sourced (client)
bareos-client-static-16.2.7    Backup archiving recovery open sourced (static client)
bareos-docs-16.2.7             Bareos document set (PDF)
bareos-server-16.2.7           Backup archiving recovery open sourced (server)
bareos-traymonitor-16.2.7      Backup archiving recovery open sourced (traymonitor)
bareos-webui-16.2.7            PHP-Frontend to manage Bareos over the web

Now we will install Bareos along with all needed components for its environment.

root@replica:~ # pkg install \
  bareos-client bareos-server bareos-webui postgresql95-server nginx \
  php56 php56-xml php56-session php56-simplexml php56-gd php56-ctype \
  php56-mbstring php56-zlib php56-tokenizer php56-iconv php56-mcrypt \
  php56-pear-DB_ldap php56-zip php56-dom php56-sqlite3 php56-gettext \
  php56-curl php56-json php56-opcache php56-wddx php56-hash php56-soap

The bareos, pgsql and www users have been added by pkg(8) along with their packages.

root@replica:~ # id bareos
uid=997(bareos) gid=997(bareos) groups=997(bareos)

root@replica:~ # id pgsql
uid=70(pgsql) gid=70(pgsql) groups=70(pgsql)

root@replica:~ # id www
uid=80(www) gid=80(www) groups=80(www)

PostgreSQL

First we will setup the PostgreSQL database.

We will add separate pgsql login class for PostgreSQL database user.

root@replica:~ # cat >> /etc/login.conf << __EOF
# PostgreSQL
pgsql:\
        :lang=en_US.UTF-8:\
        :setenv=LC_COLLATE=C:\
        :tc=default:

__EOF

This is one of the rare occasions when I would appreciate the -p flag from the AIX grep command to display the whole paragraph 😉

root@replica:~ # grep -B 1 -A 3 pgsql /etc/login.conf
# PostgreSQL
pgsql:\
        :lang=en_US.UTF-8:\
        :setenv=LC_COLLATE=C:\
        :tc=default:

Let's reload the login database.

root@replica:~ # cap_mkdb /etc/login.conf

Here are the PostgreSQL rc(8) startup script ‘options’ that can be set in the /etc/rc.conf file.

root@replica:~ # grep '#  postgresql' /usr/local/etc/rc.d/postgresql
#  postgresql_enable="YES"
#  postgresql_data="/usr/local/pgsql/data"
#  postgresql_flags="-w -s -m fast"
#  postgresql_initdb_flags="--encoding=utf-8 --lc-collate=C"
#  postgresql_class="default"
#  postgresql_profiles=""

We only need postgresql_enable and postgresql_class to be set.

We will enable them now in the /etc/rc.conf file.

root@replica:~ # grep -A 10 BAREOS /etc/rc.conf
# BAREOS
  postgresql_enable=YES
  postgresql_class=pgsql
# bareos_dir_enable=YES
# bareos_sd_enable=YES
# bareos_fd_enable=YES
# php_fpm_enable=YES
# nginx_enable=YES

We will now initialize the PostgreSQL database for Bareos.

root@replica:~ # /usr/local/etc/rc.d/postgresql initdb
The files belonging to this database system will be owned by user "pgsql".
This user must also own the server process.

The database cluster will be initialized with locales
  COLLATE:  C
  CTYPE:    en_US.UTF-8
  MESSAGES: en_US.UTF-8
  MONETARY: en_US.UTF-8
  NUMERIC:  en_US.UTF-8
  TIME:     en_US.UTF-8
The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory /usr/local/pgsql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
creating template1 database in /usr/local/pgsql/data/base/1 ... ok
initializing pg_authid ... ok
initializing dependencies ... ok
creating system views ... ok
loading system objects' descriptions ... ok
creating collations ... ok
creating conversions ... ok
creating dictionaries ... ok
setting privileges on built-in objects ... ok
creating information schema ... ok
loading PL/pgSQL server-side language ... ok
vacuuming database template1 ... ok
copying template1 to template0 ... ok
copying template1 to postgres ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    /usr/local/bin/pg_ctl -D /usr/local/pgsql/data -l logfile start

… and start it.

root@replica:~ # /usr/local/etc/rc.d/postgresql start
LOG:  ending log output to stderr
HINT:  Future log output will go to log destination "syslog".

We will now take care of the Bareos server configuration. There are a lot of *.sample files that we do not need. We also need to take care of permissions.

root@replica:~ # chown -R bareos:bareos /usr/local/etc/bareos
root@replica:~ # find /usr/local/etc/bareos -type f -exec chmod 640 {} ';'
root@replica:~ # find /usr/local/etc/bareos -type d -exec chmod 750 {} ';'
root@replica:~ # find /usr/local/etc/bareos -name \*\.sample -delete

We also need to change permissions for the /var/run and /var/db directories for Bareos.

root@replica:~ # chown -R bareos:bareos /var/db/bareos
root@replica:~ # chown -R bareos:bareos /var/run/bareos

To keep a ‘trace’ of our changes we will keep a copy of the original configuration, to track what we have changed in the process of configuring our Bareos environment.

root@replica:~ # cp -a /usr/local/etc/bareos /usr/local/etc/bareos.ORG

Now we will configure the Bareos Catalog in the /usr/local/etc/bareos/bareos-dir.d/catalog/MyCatalog.conf file. Here are its contents after our modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/catalog/MyCatalog.conf
Catalog {
  Name = MyCatalog
  dbdriver = "postgresql"
  dbname = "bareos"
  dbuser = "bareos"
  dbpassword = "BAREOS-DATABASE-PASSWORD"
}

Let's make sure that the pgsql and www users are in the bareos group, so they can read its configuration files.

root@replica:~ # pw groupmod bareos -m pgsql

root@replica:~ # id pgsql
uid=70(pgsql) gid=70(pgsql) groups=70(pgsql),997(bareos)

root@replica:~ # pw groupmod bareos -m www

root@replica:~ # id www
uid=80(www) gid=80(www) groups=80(www),997(bareos)

Now we will prepare the PostgreSQL database for our Bareos instance. We will use the scripts provided by the Bareos package from the /usr/local/lib/bareos/scripts path.

root@replica:~ # su - pgsql

$ whoami
pgsql

$ /usr/local/lib/bareos/scripts/create_bareos_database
Creating postgresql database
CREATE DATABASE
ALTER DATABASE
Database encoding OK
Creating of bareos database succeeded.

$ /usr/local/lib/bareos/scripts/make_bareos_tables
Making postgresql tables
CREATE TABLE
ALTER TABLE
CREATE INDEX
CREATE TABLE
ALTER TABLE
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE INDEX
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
DELETE 0
INSERT 0 1
Creation of Bareos PostgreSQL tables succeeded.

$ /usr/local/lib/bareos/scripts/grant_bareos_privileges
Granting postgresql tables
CREATE ROLE
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
Privileges for user bareos granted ON database bareos.

We can now verify that we have the needed database created.

root@replica:~ # su -m bareos -c 'psql -l'
                             List of databases
   Name    | Owner | Encoding  | Collate |    Ctype    | Access privileges 
-----------+-------+-----------+---------+-------------+-------------------
 bareos    | pgsql | SQL_ASCII | C       | C           | 
 postgres  | pgsql | UTF8      | C       | en_US.UTF-8 | 
 template0 | pgsql | UTF8      | C       | en_US.UTF-8 | =c/pgsql         +
           |       |           |         |             | pgsql=CTc/pgsql
 template1 | pgsql | UTF8      | C       | en_US.UTF-8 | =c/pgsql         +
           |       |           |         |             | pgsql=CTc/pgsql
(4 rows)

We will also add a housekeeping script for the PostgreSQL database and put it into crontab(1).

root@replica:~ # su - pgsql

$ whoami
pgsql

$ cat > /usr/local/pgsql/vacuum.sh << __EOF
#! /bin/sh

/usr/local/bin/vacuumdb -a -z 1> /dev/null 2> /dev/null
/usr/local/bin/reindexdb -a   1> /dev/null 2> /dev/null
/usr/local/bin/reindexdb -s   1> /dev/null 2> /dev/null
__EOF

$ chmod +x /usr/local/pgsql/vacuum.sh

$ cat /usr/local/pgsql/vacuum.sh
#! /bin/sh

/usr/local/bin/vacuumdb -a -z 1> /dev/null 2> /dev/null
/usr/local/bin/reindexdb -a   1> /dev/null 2> /dev/null
/usr/local/bin/reindexdb -s   1> /dev/null 2> /dev/null

$ crontab -e

$ exit

root@replica:~ # cat /var/cron/tabs/pgsql
# DO NOT EDIT THIS FILE - edit the master and reinstall.
# (/tmp/crontab.Be9j9VVCUa installed on Thu Apr 26 21:45:04 2018)
# (Cron version -- $FreeBSD$)
0 0 * * * /usr/local/pgsql/vacuum.sh

root@replica:~ # su -m pgsql -c 'crontab -l'
0 0 * * * /usr/local/pgsql/vacuum.sh
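
If You prefer a non-interactive route instead of crontab -e, the same entry can be installed from stdin – a minimal sketch:

root@replica:~ # su -m pgsql -c 'echo "0 0 * * * /usr/local/pgsql/vacuum.sh" | crontab -'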

Storage

I assume that the primary storage would be mounted in the /bareos directory from one NFS server, while the Disaster Recovery copy would be mounted as /bareos-dr from another NFS server. Below is an example NFS configuration of these mount points.

root@replica:~ # mkdir /bareos /bareos-dr

root@replica:~ # mount -t nfs
nfs-pri.backup.org:/export/bareos on /bareos (nfs, noatime)
nfs-sec.backup.org:/export/bareos-dr on /bareos-dr (nfs, noatime)

root@replica:~ # cat >> /etc/fstab << __EOF
#DEV                                  #MNT        #FS  #OPTS                                                         #DP
nfs-pri.backup.org:/export/bareos     /bareos     nfs  rw,noatime,rsize=1048576,wsize=1048576,readahead=4,soft,intr  0 0
nfs-sec.backup.org:/export/bareos-dr  /bareos-dr  nfs  rw,noatime,rsize=1048576,wsize=1048576,readahead=4,soft,intr  0 0
__EOF
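
With the /etc/fstab entries in place the shares can now be mounted by their mount points alone – a quick sketch:

root@replica:~ # mount /bareos
root@replica:~ # mount /bareos-dr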

root@replica:~ # mkdir -p /bareos/bootstrap
root@replica:~ # mkdir -p /bareos/restore
root@replica:~ # mkdir -p /bareos/storage/FileStorage

root@replica:~ # mkdir -p /bareos-dr/bootstrap
root@replica:~ # mkdir -p /bareos-dr/restore
root@replica:~ # mkdir -p /bareos-dr/storage/FileStorage

root@replica:~ # chown -R bareos:bareos /bareos /bareos-dr

root@replica:~ # find /bareos /bareos-dr -ls | column -t
69194  1  drwxr-xr-x  5  bareos  bareos  5  Apr  27  00:42  /bareos
72239  1  drwxr-xr-x  2  bareos  bareos  2  Apr  27  00:42  /bareos/restore
72240  1  drwxr-xr-x  3  bareos  bareos  3  Apr  27  00:42  /bareos/storage
72241  1  drwxr-xr-x  2  bareos  bareos  2  Apr  27  00:42  /bareos/storage/FileStorage
72238  1  drwxr-xr-x  2  bareos  bareos  2  Apr  27  00:42  /bareos/bootstrap
69195  1  drwxr-xr-x  5  bareos  bareos  5  Apr  27  00:43  /bareos-dr
72254  1  drwxr-xr-x  3  bareos  bareos  3  Apr  27  00:43  /bareos-dr/storage
72255  1  drwxr-xr-x  2  bareos  bareos  2  Apr  27  00:43  /bareos-dr/storage/FileStorage
72253  1  drwxr-xr-x  2  bareos  bareos  2  Apr  27  00:42  /bareos-dr/restore
72252  1  drwxr-xr-x  2  bareos  bareos  2  Apr  27  00:42  /bareos-dr/bootstrap

Bareos

As we already used BAREOS-DATABASE-PASSWORD for the bareos user on PostgreSQL’s Bareos database, we will use passwords like these for the remaining parts of the Bareos subsystems. I think these passwords are self-explanatory as to which Bareos components they belong 🙂

  • BAREOS-DATABASE-PASSWORD
  • BAREOS-DIR-PASSWORD
  • BAREOS-SD-PASSWORD
  • BAREOS-FD-PASSWORD
  • BAREOS-MON-PASSWORD
  • ADMIN-PASSWORD

We will now configure all these Bareos subsystems.

We already modified the MyCatalog.conf file; here are its contents.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/catalog/MyCatalog.conf
Catalog {
  Name = MyCatalog
  dbdriver = "postgresql"
  dbname = "bareos"
  dbuser = "bareos"
  dbpassword = "BAREOS-DATABASE-PASSWORD"
}

Contents of the /usr/local/etc/bareos/bconsole.d/bconsole.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bconsole.d/bconsole.conf
#
# Bareos User Agent (or Console) Configuration File
#

Director {
  Name = replica.backup.org
  address = localhost
  Password = "BAREOS-DIR-PASSWORD"
  Description = "Bareos Console credentials for local Director"
}

Contents of the /usr/local/etc/bareos/bareos-dir.d/director/bareos-dir.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/director/bareos-dir.conf
Director {
  Name = replica.backup.org
  QueryFile = "/usr/local/lib/bareos/scripts/query.sql"
  Maximum Concurrent Jobs = 100
  Password = "BAREOS-DIR-PASSWORD"
  Messages = Daemon
  Auditing = yes

  # Enable the Heartbeat if you experience connection losses
  # (eg. because of your router or firewall configuration).
  # Additionally the Heartbeat can be enabled in bareos-sd and bareos-fd.
  #
  # Heartbeat Interval = 1 min

  # remove comment in next line to load dynamic backends from specified directory
  # Backend Directory = /usr/local/lib

  # remove comment from "Plugin Directory" to load plugins from specified directory.
  # if "Plugin Names" is defined, only the specified plugins will be loaded,
  # otherwise all director plugins (*-dir.so) from the "Plugin Directory".
  #
  # Plugin Directory = /usr/local/lib/bareos/plugins
  # Plugin Names = ""
}

Contents of the /usr/local/etc/bareos/bareos-dir.d/job/RestoreFiles.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/job/RestoreFiles.conf
Job {
  Name = "RestoreFiles"
  Description = "Standard Restore."
  Type = Restore
  Client = Default
  FileSet = "SelfTest"
  Storage = File
  Pool = BR-MO
  Messages = Standard
  Where = /bareos/restore
  Accurate = yes
}

New /usr/local/etc/bareos/bareos-dir.d/client/Default.conf file.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/client/Default.conf
Client {
  Name = Default
  address = replica.backup.org
  Password = "BAREOS-FD-PASSWORD"
}

New /usr/local/etc/bareos/bareos-dir.d/client/replica.backup.org.conf file.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/client/replica.backup.org.conf
Client {
  Name = replica.backup.org
  Description = "Client resource of the Director itself."
  address = replica.backup.org
  Password = "BAREOS-FD-PASSWORD"
}

The file below is left unchanged.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/job/BackupCatalog.conf
Job {
  Name = "BackupCatalog"
  Description = "Backup the catalog database (after the nightly save)"
  JobDefs = "DefaultJob"
  Level = Full
  FileSet="Catalog"
  Schedule = "WeeklyCycleAfterBackup"

  # This creates an ASCII copy of the catalog
  # Arguments to make_catalog_backup.pl are:
  #  make_catalog_backup.pl <catalog-name>
  RunBeforeJob = "/usr/local/lib/bareos/scripts/make_catalog_backup.pl MyCatalog"

  # This deletes the copy of the catalog
  RunAfterJob  = "/usr/local/lib/bareos/scripts/delete_catalog_backup"

  # This sends the bootstrap via mail for disaster recovery.
  # Should be sent to another system, please change recipient accordingly
  Write Bootstrap = "|/usr/local/bin/bsmtp -h localhost -f \"\(Bareos\) \<%r\>\" -s \"Bootstrap for Job %j\" root@localhost" # (#01)
  Priority = 11                   # run after main backup
}

The file below is left unchanged.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/messages/Standard.conf
Messages {
  Name = Standard
  Description = "Reasonable message delivery -- send most everything to email address and to the console."
  operatorcommand = "/usr/local/bin/bsmtp -h localhost -f \"\(Bareos\) \<%r\>\" -s \"Bareos: Intervention needed for %j\" %r"
  mailcommand = "/usr/local/bin/bsmtp -h localhost -f \"\(Bareos\) \<%r\>\" -s \"Bareos: %t %e of %c %l\" %r"
  operator = root@localhost = mount                                 # (#03)
  mail = root@localhost = all, !skipped, !saved, !audit             # (#02)
  console = all, !skipped, !saved, !audit
  append = "/var/log/bareos/bareos.log" = all, !skipped, !saved, !audit
  catalog = all, !skipped, !saved, !audit
}

The file below is left unchanged.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/messages/Daemon.conf
Messages {
  Name = Daemon
  Description = "Message delivery for daemon messages (no job)."
  mailcommand = "/usr/local/bin/bsmtp -h localhost -f \"\(Bareos\) \<%r\>\" -s \"Bareos daemon message\" %r"
  mail = root@localhost = all, !skipped, !audit # (#02)
  console = all, !skipped, !saved, !audit
  append = "/var/log/bareos/bareos.log" = all, !skipped, !audit
  append = "/var/log/bareos/bareos-audit.log" = audit
}

Pools

By default Bareos comes with four pools configured; we will not use them so we will delete their configuration files.

root@replica:~ # ls -l /usr/local/etc/bareos/bareos-dir.d/pool
total 14
-rw-rw----  1 bareos  bareos  536 Apr 16 08:14 Differential.conf
-rw-rw----  1 bareos  bareos  512 Apr 16 08:14 Full.conf
-rw-rw----  1 bareos  bareos  534 Apr 16 08:14 Incremental.conf
-rw-rw----  1 bareos  bareos   48 Apr 16 08:14 Scratch.conf

root@replica:~ # rm -f /usr/local/etc/bareos/bareos-dir.d/pool/*.conf

We will now create our two pools, for the DAILY backups and for the MONTHLY backups.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/pool/BRONZE-DAILY-POOL.conf
Pool {
  Name = BR-DA
  Pool Type = Backup
  Recycle = yes                       # Bareos can automatically recycle Volumes
  AutoPrune = yes                     # Prune expired volumes
  Volume Retention = 7 days           # How long should the Full Backups be kept? (#06)
  Maximum Volume Bytes = 2G           # Limit Volume size to something reasonable
  Maximum Volumes = 100000            # Limit number of Volumes in Pool
  Label Format = "BR-DA-"             # Volumes will be labeled "BR-DA-"
}

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/pool/BRONZE-MONTHLY-POOL.conf
Pool {
  Name = BR-MO
  Pool Type = Backup
  Recycle = yes                       # Bareos can automatically recycle Volumes
  AutoPrune = yes                     # Prune expired volumes
  Volume Retention = 120 days         # How long should the Full Backups be kept? (#06)
  Maximum Volume Bytes = 2G           # Limit Volume size to something reasonable
  Maximum Volumes = 100000            # Limit number of Volumes in Pool
  Label Format = "BR-MO-"             # Volumes will be labeled "BR-MO-"
}
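
A quick sanity check of these limits: with Maximum Volume Bytes = 2G and Maximum Volumes = 100000 a single pool is capped at roughly 100000 × 2 GB ≈ 200 TB, so in practice the 7 day (BR-DA) and 120 day (BR-MO) retention periods will expire and recycle volumes long before the pool limits are hit.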

The file below is left unchanged.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/schedule/WeeklyCycle.conf
Schedule {
  Name = "WeeklyCycle"
  Run = Full 1st sat at 21:00                   # (#04)
  Run = Differential 2nd-5th sat at 21:00       # (#07)
  Run = Incremental mon-fri at 21:00            # (#10)
}

Contents of the /usr/local/etc/bareos/bareos-dir.d/jobdefs/DefaultJob.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/jobdefs/DefaultJob.conf
JobDefs {
  Name = "DefaultJob"
  Type = Backup
  Level = Differential
  Client = Default
  FileSet = "SelfTest"
  Schedule = "WeeklyCycle"
  Storage = File
  Messages = Standard
  Pool = BR-DA
  Priority = 10
  Write Bootstrap = "/bareos/bootstrap/%c.bsr"
}

Contents of the /usr/local/etc/bareos/bareos-dir.d/storage/File.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/storage/File.conf
Storage {
  Name = File
  Address = replica.backup.org
  Password = "BAREOS-SD-PASSWORD"
  Device = FileStorage
  Media Type = File
}

Contents of the /usr/local/etc/bareos/bareos-dir.d/console/bareos-mon.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/console/bareos-mon.conf
Console {
  Name = bareos-mon
  Description = "Restricted console used by tray-monitor to get the status of the director."
  Password = "BAREOS-MON-PASSWORD"
  CommandACL = status, .status
  JobACL = *all*
}

Contents of the /usr/local/etc/bareos/bareos-dir.d/fileset/Catalog.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/fileset/Catalog.conf
FileSet {
  Name = "Catalog"
  Description = "Backup the catalog dump and Bareos configuration files."
  Include {
    Options {
      signature = MD5
      Compression = lzo
    }
    File = "/var/db/bareos"
    File = "/usr/local/etc/bareos"
  }
}

Contents of the /usr/local/etc/bareos/bareos-dir.d/fileset/SelfTest.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/fileset/SelfTest.conf
FileSet {
  Name = "SelfTest"
  Description = "fileset just to backup some files for selftest"
  Include {
    Options {
      Signature   = MD5
      Compression = lzo
    }
    File = "/usr/local/sbin"
  }
}

We do not need the bundled LinuxAll.conf and WindowsAllDrives.conf filesets so we will delete them.

root@replica:~ # ls -l /usr/local/etc/bareos/bareos-dir.d/fileset/
total 18
-rw-rw----  1 bareos  bareos  250 Apr 27 02:25 Catalog.conf
-rw-rw----  1 bareos  bareos  765 Apr 16 08:14 LinuxAll.conf
-rw-rw----  1 bareos  bareos  210 Apr 27 02:27 SelfTest.conf
-rw-rw----  1 bareos  bareos  362 Apr 16 08:14 WindowsAllDrives.conf

root@replica:~ # rm -f /usr/local/etc/bareos/bareos-dir.d/fileset/LinuxAll.conf

root@replica:~ # rm -f /usr/local/etc/bareos/bareos-dir.d/fileset/WindowsAllDrives.conf

We will now define two new filesets in the Windows.conf and UNIX.conf files.

New /usr/local/etc/bareos/bareos-dir.d/fileset/Windows.conf file.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/fileset/Windows.conf
FileSet {
  Name = Windows
  Enable VSS = yes
  Include {
    Options {
      Signature = MD5
      Drive Type = fixed
      IgnoreCase = yes
      WildFile = "[A-Z]:/pagefile.sys"
      WildDir  = "[A-Z]:/RECYCLER"
      WildDir  = "[A-Z]:/$RECYCLE.BIN"
      WildDir  = "[A-Z]:/System Volume Information"
      Exclude = yes
      Compression = lzo
    }
    File = /
  }
}

New /usr/local/etc/bareos/bareos-dir.d/fileset/UNIX.conf file.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/fileset/UNIX.conf
FileSet {
  Name = "UNIX"
  Include {
    Options {
      Signature = MD5 # calculate md5 checksum per file
      One FS = No     # change into other filesystems
      FS Type = ufs
      FS Type = btrfs
      FS Type = ext2  # filesystems of given types will be backed up
      FS Type = ext3  # others will be ignored
      FS Type = ext4
      FS Type = reiserfs
      FS Type = jfs
      FS Type = xfs
      FS Type = zfs
      noatime = yes
      Compression = lzo
    }
    File = /
  }
  # Things that usually have to be excluded
  # You have to exclude /tmp
  # on your bareos server
  Exclude {
    File = /var/db/bareos
    File = /tmp
    File = /proc
    File = /sys
    File = /var/tmp
    File = /.journal
    File = /.fsck
  }
}
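
Once the Director is running (we will start it later in this guide) a fileset like this can be sanity-checked without writing a single byte of backup data – a sketch using the bconsole estimate command, with a hypothetical job name MyJob that uses the UNIX fileset:

root@replica:~ # echo 'estimate job=MyJob listing' | bconsole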

The file below is left unchanged.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/profile/operator.conf
Profile {
   Name = operator
   Description = "Profile allowing normal Bareos operations."

   Command ACL = !.bvfs_clear_cache, !.exit, !.sql
   Command ACL = !configure, !create, !delete, !purge, !sqlquery, !umount, !unmount
   Command ACL = *all*

   Catalog ACL = *all*
   Client ACL = *all*
   FileSet ACL = *all*
   Job ACL = *all*
   Plugin Options ACL = *all*
   Pool ACL = *all*
   Schedule ACL = *all*
   Storage ACL = *all*
   Where ACL = *all*
}

Contents of the /usr/local/etc/bareos/bareos-sd.d/messages/Standard.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-sd.d/messages/Standard.conf
Messages {
  Name = Standard
  Director = replica.backup.org = all
  Description = "Send all messages to the Director."
}

We will add the /bareos/storage/FileStorage path as our FileStorage place for backups.

Contents of the /usr/local/etc/bareos/bareos-sd.d/device/FileStorage.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-sd.d/device/FileStorage.conf
Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /bareos/storage/FileStorage
  LabelMedia = yes;                   # lets Bareos label unlabeled media
  Random Access = yes;
  AutomaticMount = yes;               # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Description = "File device. A connecting Director must have the same Name and MediaType."
}

Contents of the /usr/local/etc/bareos/bareos-sd.d/storage/bareos-sd.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-sd.d/storage/bareos-sd.conf
Storage {
  Name = replica.backup.org
  Maximum Concurrent Jobs = 20

  # remove comment from "Plugin Directory" to load plugins from specified directory.
  # if "Plugin Names" is defined, only the specified plugins will be loaded,
  # otherwise all storage plugins (*-sd.so) from the "Plugin Directory".
  #
  # Plugin Directory = /usr/local/lib/bareos/plugins
  # Plugin Names = ""
}

Contents of the /usr/local/etc/bareos/bareos-sd.d/director/bareos-mon.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-sd.d/director/bareos-mon.conf
Director {
  Name = bareos-mon
  Password = "BAREOS-SD-PASSWORD"
  Monitor = yes
  Description = "Restricted Director, used by tray-monitor to get the status of this storage daemon."
}

Contents of the /usr/local/etc/bareos/bareos-sd.d/director/bareos-dir.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-sd.d/director/bareos-dir.conf
Director {
  Name = replica.backup.org
  Password = "BAREOS-SD-PASSWORD"
  Description = "Director, who is permitted to contact this storage daemon."
}

Contents of the /usr/local/etc/bareos/bareos-fd.d/messages/Standard.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-fd.d/messages/Standard.conf
Messages {
  Name = Standard
  Director = replica.backup.org = all, !skipped, !restored
  Description = "Send relevant messages to the Director."
}

Contents of the /usr/local/etc/bareos/bareos-fd.d/director/bareos-dir.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-fd.d/director/bareos-dir.conf
Director {
  Name = replica.backup.org
  Password = "BAREOS-FD-PASSWORD"
  Description = "Allow the configured Director to access this file daemon."
}

Contents of the /usr/local/etc/bareos/bareos-fd.d/director/bareos-mon.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-fd.d/director/bareos-mon.conf
Director {
  Name = bareos-mon
  Password = "BAREOS-MON-PASSWORD"
  Monitor = yes
  Description = "Restricted Director, used by tray-monitor to get the status of this file daemon."
}

Contents of the /usr/local/etc/bareos/bareos-fd.d/client/myself.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-fd.d/client/myself.conf
Client {
  Name = replica.backup.org
  Maximum Concurrent Jobs = 20

  # remove comment from "Plugin Directory" to load plugins from specified directory.
  # if "Plugin Names" is defined, only the specified plugins will be loaded,
  # otherwise all storage plugins (*-fd.so) from the "Plugin Directory".
  #
  # Plugin Directory = /usr/local/lib/bareos/plugins
  # Plugin Names = ""

  # if compatible is set to yes, we are compatible with bacula
  # if set to no, new bareos features are enabled which is the default
  # compatible = yes
}

Contents of the /usr/local/etc/bareos/bareos-dir.d/client/bareos-fd.conf file after modifications.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/client/bareos-fd.conf
Client {
  Name = bareos-fd
  Description = "Client resource of the Director itself."
  Address = localhost
  Password = "BAREOS-FD-PASSWORD"
}

Let's see which files and Bareos components hold which passwords.

root@replica:~ # cd /usr/local/etc/bareos

root@replica:/usr/local/etc/bareos # pwd
/usr/local/etc/bareos

root@replica:/usr/local/etc/bareos # grep -r Password . | sort -k 4 | column -t
./bareos-dir.d/director/bareos-dir.conf:        Password  =  "BAREOS-DIR-PASSWORD"
./bconsole.d/bconsole.conf:                     Password  =  "BAREOS-DIR-PASSWORD"
./bareos-dir.d/client/Default.conf:             Password  =  "BAREOS-FD-PASSWORD"
./bareos-dir.d/client/bareos-fd.conf:           Password  =  "BAREOS-FD-PASSWORD"
./bareos-dir.d/client/replica.backup.org.conf:  Password  =  "BAREOS-FD-PASSWORD"
./bareos-fd.d/director/bareos-dir.conf:         Password  =  "BAREOS-FD-PASSWORD"
./bareos-dir.d/console/bareos-mon.conf:         Password  =  "BAREOS-MON-PASSWORD"
./bareos-fd.d/director/bareos-mon.conf:         Password  =  "BAREOS-MON-PASSWORD"
./bareos-dir.d/storage/File.conf:               Password  =  "BAREOS-SD-PASSWORD"
./bareos-sd.d/director/bareos-dir.conf:         Password  =  "BAREOS-SD-PASSWORD"
./bareos-sd.d/director/bareos-mon.conf:         Password  =  "BAREOS-SD-PASSWORD"
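
These placeholders should of course be replaced with real random secrets. A minimal sketch that swaps one placeholder for a generated value in every file that holds it – repeat the same for each of the remaining placeholders:

root@replica:/usr/local/etc/bareos # PASS=$(openssl rand -hex 16)
root@replica:/usr/local/etc/bareos # grep -r -l BAREOS-DIR-PASSWORD . | xargs sed -i '' "s/BAREOS-DIR-PASSWORD/${PASS}/g"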

Let's fix the permissions after creating all the new files.

root@replica:~ # chown -R bareos:bareos /usr/local/etc/bareos
root@replica:~ # find /usr/local/etc/bareos -type f -exec chmod 640 {} ';'
root@replica:~ # find /usr/local/etc/bareos -type d -exec chmod 750 {} ';'

Bareos WebUI

Now we will add/configure files for the Bareos WebUI interface.

The main Nginx webserver configuration file.

root@replica:~ # cat /usr/local/etc/nginx/nginx.conf
user                 www;
worker_processes     4;
worker_rlimit_nofile 51200;
error_log            /var/log/nginx/error.log;

events {
  worker_connections 1024;
}

http {
  include           mime.types;
  default_type      application/octet-stream;
  log_format        main '$remote_addr - $remote_user [$time_local] "$request" ';
  access_log        /var/log/nginx/access.log main;
  sendfile          on;
  keepalive_timeout 65;

  server {
    listen       9100;
    server_name  replica.backup.org bareos;
    root         /usr/local/www/bareos-webui/public;

    location / {
      index index.php;
      try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
      fastcgi_pass 127.0.0.1:9000;
      fastcgi_param APPLICATION_ENV production;
      fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
      include fastcgi_params;
      try_files $uri =404;
    }
  }
}

For PHP we will modify the config file bundled with the package – the /usr/local/etc/php.ini-production file.

root@replica:~ # cp /usr/local/etc/php.ini-production /usr/local/etc/php.ini

root@replica:~ # vi /usr/local/etc/php.ini

We only add the timezone – for my location it is Europe/Warsaw.

root@replica:~ # diff -u php.ini-production php.ini
--- php.ini-production  2017-08-12 03:23:36.000000000 +0200
+++ php.ini     2017-09-12 18:50:40.513138000 +0200
@@ -934,6 +934,7 @@
 ; Defines the default timezone used by the date functions
 ; http://php.net/date.timezone
 ;date.timezone =
+date.timezone = Europe/Warsaw

 ; http://php.net/date.default-latitude
 ;date.default_latitude = 31.7667

Here is the PHP php-fpm daemon configuration.

root@replica:~ # cat /usr/local/etc/php-fpm.conf
[global]
pid = run/php-fpm.pid
log_level = notice

[www]
user = www
group = www
listen = 127.0.0.1:9000
listen.backlog = -1
listen.owner = www
listen.group = www
listen.mode = 0660
listen.allowed_clients = 127.0.0.1
pm = static
pm.max_children = 4
pm.start_servers = 1
pm.min_spare_servers = 0
pm.max_spare_servers = 4
pm.process_idle_timeout = 1000s;
pm.max_requests = 500
request_terminate_timeout = 0
rlimit_files = 51200
env[HOSTNAME] = $HOSTNAME
env[PATH] = /usr/local/bin:/usr/bin:/bin
env[TMP] = /tmp
env[TMPDIR] = /tmp
env[TEMP] = /tmp

The rest of the Bareos WebUI configuration.

New /usr/local/etc/bareos/bareos-dir.d/console/admin.conf file.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/console/admin.conf
Console {
  Name = admin
  Password = ADMIN-PASSWORD
  Profile = webui-admin
}

New /usr/local/etc/bareos/bareos-dir.d/profile/webui-admin.conf file.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.d/profile/webui-admin.conf
Profile {
  Name = webui-admin
  CommandACL = !.bvfs_clear_cache, !.exit, !.sql, !configure, !create, !delete, !purge, !sqlquery, !umount, !unmount, *all*
  Job ACL = *all*
  Schedule ACL = *all*
  Catalog ACL = *all*
  Pool ACL = *all*
  Storage ACL = *all*
  Client ACL = *all*
  FileSet ACL = *all*
  Where ACL = *all*
  Plugin Options ACL = *all*
}

You may add other directors here as well.

Modified /usr/local/etc/bareos-webui/directors.ini file.

root@replica:~ # cat /usr/local/etc/bareos-webui/directors.ini
;------------------------------------------------------------------------------
; Section localhost-dir
;------------------------------------------------------------------------------
[replica.backup.org]
enabled = "yes"
diraddress = "replica.backup.org"
dirport = 9101
catalog = "MyCatalog"

Modified /usr/local/etc/bareos-webui/configuration.ini file.

root@replica:~ # cat /usr/local/etc/bareos-webui/configuration.ini
;------------------------------------------------------------------------------
; SESSION SETTINGS
;------------------------------------------------------------------------------
[session]
timeout=3600

;------------------------------------------------------------------------------
; DASHBOARD SETTINGS
;------------------------------------------------------------------------------
[dashboard]
autorefresh_interval=60000

;------------------------------------------------------------------------------
; TABLE SETTINGS
;------------------------------------------------------------------------------
[tables]
pagination_values=10,25,50,100
pagination_default_value=25
save_previous_state=false

;------------------------------------------------------------------------------
; VARIOUS SETTINGS
;------------------------------------------------------------------------------
[autochanger]
labelpooltype=scratch

Last but not least, we need to set permissions for Bareos WebUI configuration files.

root@replica:~ # chown -R www:www /usr/local/etc/bareos-webui
root@replica:~ # chown -R www:www /usr/local/www/bareos-webui

Logs

Let's create the needed log files and fix their permissions.

root@replica:~ # chown -R bareos:bareos /var/log/bareos
root@replica:~ # :>               /var/log/php-fpm.log
root@replica:~ # chown -R www:www /var/log/php-fpm.log
root@replica:~ # chown -R www:www /var/log/nginx

We will now add rules to the newsyslog(8) log rotation daemon – we do not want our filesystem to fill up, do we?

As newsyslog(8) already covers the *.conf.d directories with its <include> directive, we will use them instead of modifying the main /etc/newsyslog.conf configuration file.

root@replica:~ # grep conf\\.d /etc/newsyslog.conf
<include> /etc/newsyslog.conf.d/*
<include> /usr/local/etc/newsyslog.conf.d/*

root@replica:~ # mkdir -p /usr/local/etc/newsyslog.conf.d

root@replica:~ # cat > /usr/local/etc/newsyslog.conf.d/bareos << __EOF
# BAREOS
/var/log/php-fpm.log             www:www       640  7     100    @T00  J
/var/log/nginx/access.log        www:www       640  7     100    @T00  J
/var/log/nginx/error.log         www:www       640  7     100    @T00  J
/var/log/bareos/bareos.log       bareos:bareos 640  7     100    @T00  J
/var/log/bareos/bareos-audit.log bareos:bareos 640  7     100    @T00  J
__EOF

Let's verify that newsyslog(8) understands our configuration.

root@replica:~ # newsyslog -v | tail -5
/var/log/php-fpm.log : --> will trim at Tue May  1 00:00:00 2018
/var/log/nginx/access.log : --> will trim at Tue May  1 00:00:00 2018
/var/log/nginx/error.log : --> will trim at Tue May  1 00:00:00 2018
/var/log/bareos/bareos.log : --> will trim at Tue May  1 00:00:00 2018
/var/log/bareos/bareos-audit.log : --> will trim at Tue May  1 00:00:00 2018

Skel

We now need to create the so-called Bareos skel files for the rc(8) script, to gather all the configuration in one file.

If we do not do that, the Bareos services will not start and we will see an error like the one below.

root@replica:~ # /usr/local/etc/rc.d/bareos-sd onestart
Starting bareos_sd.
27-Apr 02:59 bareos-sd JobId 0: Error: parse_conf.c:580 Failed to read config file "/usr/local/etc/bareos/bareos-sd.conf"
bareos-sd ERROR TERMINATION
parse_conf.c:148 Failed to find config filename.
/usr/local/etc/rc.d/bareos-sd: WARNING: failed to start bareos_sd

Let's create them then …

root@replica:~ # cat > /usr/local/etc/bareos/bareos-dir.conf << __EOF
@/usr/local/etc/bareos/bareos-dir.d/*/*
__EOF

root@replica:~ # cat > /usr/local/etc/bareos/bareos-fd.conf << __EOF
@/usr/local/etc/bareos/bareos-fd.d/*/*
__EOF

root@replica:~ # cat > /usr/local/etc/bareos/bareos-sd.conf << __EOF
@/usr/local/etc/bareos/bareos-sd.d/*/*
__EOF

root@replica:~ # cat > /usr/local/etc/bareos/bconsole.conf << __EOF
@/usr/local/etc/bareos/bconsole.d/*
__EOF

… and verify their contents.

root@replica:~ # cat /usr/local/etc/bareos/bareos-dir.conf
@/usr/local/etc/bareos/bareos-dir.d/*/*

root@replica:~ # cat /usr/local/etc/bareos/bareos-fd.conf
@/usr/local/etc/bareos/bareos-fd.d/*/*

root@replica:~ # cat /usr/local/etc/bareos/bareos-sd.conf
@/usr/local/etc/bareos/bareos-sd.d/*/*

root@replica:~ # cat /usr/local/etc/bareos/bconsole.conf
@/usr/local/etc/bareos/bconsole.d/*

After all our modifications and added files let's make sure that the /usr/local/etc/bareos dir permissions are properly set.

root@replica:~ # chown -R bareos:bareos /usr/local/etc/bareos
root@replica:~ # find /usr/local/etc/bareos -type f -exec chmod 640 {} ';'
root@replica:~ # find /usr/local/etc/bareos -type d -exec chmod 750 {} ';'

It's Alive!

Back to our system settings – we will now enable the services in the main FreeBSD /etc/rc.conf file.

After the modifications our final /etc/rc.conf file will look as follows.

root@replica:~ # cat /etc/rc.conf
# NETWORK
  hostname=replica.backup.org
  ifconfig_em0="inet 10.0.10.30/24 up"
  defaultrouter="10.0.10.1"

# DAEMONS
  zfs_enable=YES
  sshd_enable=YES
  nfs_client_enable=YES
  syslogd_flags="-ss"
  sendmail_enable=NONE

# OTHER
  clear_tmp_enable=YES
  dumpdev=NO

# BAREOS
  postgresql_enable=YES
  postgresql_class=pgsql
  bareos_dir_enable=YES
  bareos_sd_enable=YES
  bareos_fd_enable=YES
  php_fpm_enable=YES
  nginx_enable=YES

As the PostgreSQL server is already running …

root@replica:~ # /usr/local/etc/rc.d/postgresql status
pg_ctl: server is running (PID: 15205)
/usr/local/bin/postgres "-D" "/usr/local/pgsql/data"

… we will now start the rest of our Bareos stack services.

First the PHP php-fpm daemon.

root@replica:~ # /usr/local/etc/rc.d/php-fpm start
Performing sanity check on php-fpm configuration:
[27-Apr-2018 02:57:09] NOTICE: configuration file /usr/local/etc/php-fpm.conf test is successful

Starting php_fpm.

The Nginx webserver.

root@replica:~ # /usr/local/etc/rc.d/nginx start
Performing sanity check on nginx configuration:
nginx: the configuration file /usr/local/etc/nginx/nginx.conf syntax is ok
nginx: configuration file /usr/local/etc/nginx/nginx.conf test is successful
Starting nginx.

Bareos Storage Daemon.

root@replica:~ # /usr/local/etc/rc.d/bareos-sd start
Starting bareos_sd.

The Bareos File Daemon, also known as the Bareos client.

root@replica:~ # /usr/local/etc/rc.d/bareos-fd start
Starting bareos_fd.

… and last but not least, the most important daemon of this guide – the Bareos Director.

root@replica:~ # /usr/local/etc/rc.d/bareos-dir start
Starting bareos_dir.

We may now see on what ports our daemons are listening.

root@replica:~ # sockstat -l4
USER     COMMAND    PID   FD PROTO  LOCAL ADDRESS         FOREIGN ADDRESS      
bareos   bareos-dir 89823 4  tcp4   *:9101                *:*
root     bareos-fd  73066 3  tcp4   *:9102                *:*
www      nginx      33857 6  tcp4   *:9100                *:*
www      nginx      28675 6  tcp4   *:9100                *:*
www      nginx      20960 6  tcp4   *:9100                *:*
www      nginx      15881 6  tcp4   *:9100                *:*
root     nginx      14388 6  tcp4   *:9100                *:*
www      php-fpm    84047 0  tcp4   127.0.0.1:9000        *:*
www      php-fpm    82285 0  tcp4   127.0.0.1:9000        *:*
www      php-fpm    80688 0  tcp4   127.0.0.1:9000        *:*
www      php-fpm    74735 0  tcp4   127.0.0.1:9000        *:*
root     php-fpm    70518 8  tcp4   127.0.0.1:9000        *:*
bareos   bareos-sd  5151  3  tcp4   *:9103                *:*
pgsql    postgres   20009 4  tcp4   127.0.0.1:5432        *:*
root     sshd       49253 4  tcp4   *:22                  *:*

In case You wondered in what order these services start, below is the answer from the rc(8) subsystem.

root@replica:~ # rcorder /etc/rc.d/* /usr/local/etc/rc.d/* | grep -E '(bareos|php-fpm|nginx|postgresql)'
/usr/local/etc/rc.d/postgresql
/usr/local/etc/rc.d/php-fpm
/usr/local/etc/rc.d/nginx
/usr/local/etc/rc.d/bareos-sd
/usr/local/etc/rc.d/bareos-fd
/usr/local/etc/rc.d/bareos-dir

We can now access http://replica.backup.org:9100 in our browser.

bareos-webui-01

It's indeed alive – we can now log in with the admin user and the ADMIN-PASSWORD password.

bareos-webui-02-dashboard

Once logged in, we see an empty Bareos dashboard.

Jobs

Now, to make life easier, I have prepared two scripts for adding clients to the Bareos server.

The BRONZE-job.sh and BRONZE-sched.sh scripts generate Bareos files for new jobs and schedules. We will put them into the /root/bin dir for convenience.

root@replica:~ # mkdir /root/bin

Both scripts are available below:

After downloading them please rename them accordingly (WordPress limitation).

root@replica:~ # mv BRONZE-sched.sh.key BRONZE-sched.sh
root@replica:~ # mv BRONZE-job.sh.key   BRONZE-job.sh

Let's make them executable.

root@replica:~ # chmod +x /root/bin/BRONZE-sched.sh
root@replica:~ # chmod +x /root/bin/BRONZE-job.sh

Below is the ‘help’ message for each of them.

root@replica:~ # /root/bin/BRONZE-sched.sh 
usage: BRONZE-sched.sh GROUP TIME

example:
  BRONZE-sched.sh 01 21:00
root@replica:~ # /root/bin/BRONZE-job.sh
usage: BRONZE-job.sh GROUP TIME CLIENT TYPE

  GROUP option: 01 | 02 | 03
   TIME option: 00:00 - 23:59
 CLIENT option: FQDN
   TYPE option: UNIX | Windows

example:
  BRONZE-job.sh 01 21:00 CLIENT.domain.com UNIX

Client

For the first client we will use the replica.backup.org client – the server itself.

First use BRONZE-sched.sh to create a new schedule configuration. The script will echo the names of the files it created.

root@replica:~ # /root/bin/BRONZE-sched.sh 01 21:00
/usr/local/etc/bareos/bareos-dir.d/schedule/BRONZE-DAILY-01-2100-SCHED.conf
/usr/local/etc/bareos/bareos-dir.d/jobdefs/BRONZE-DAILY-01-2100-UNIX.conf
/usr/local/etc/bareos/bareos-dir.d/jobdefs/BRONZE-DAILY-01-2100-Windows.conf
/usr/local/etc/bareos/bareos-dir.d/schedule/BRONZE-MONTHLY-01-2100-SCHED.conf
/usr/local/etc/bareos/bareos-dir.d/jobdefs/BRONZE-MONTHLY-01-2100-UNIX.conf
/usr/local/etc/bareos/bareos-dir.d/jobdefs/BRONZE-MONTHLY-01-2100-Windows.conf

We will not use Windows backups for that client in that schedule so we can remove them.

root@replica:~ # rm -f \
  /usr/local/etc/bareos/bareos-dir.d/jobdefs/BRONZE-DAILY-01-2100-Windows.conf \
  /usr/local/etc/bareos/bareos-dir.d/jobdefs/BRONZE-MONTHLY-01-2100-Windows.conf

Then use BRONZE-job.sh to add the client and its type to the schedule created earlier. The names of the created files will also be echoed to stdout.

root@replica:~ # /root/bin/BRONZE-job.sh 01 21:00 replica.backup.org UNIX
INFO: client DNS check.
INFO: DNS 'A' RECORD: Host replica.backup.org not found: 3(NXDOMAIN)
INFO: DNS 'PTR' RECORD: Host 3\(NXDOMAIN\) not found: 3(NXDOMAIN)
/usr/local/etc/bareos/bareos-dir.d/job/BRONZE-DAILY-01-2100-replica.backup.org.conf
/usr/local/etc/bareos/bareos-dir.d/job/BRONZE-MONTHLY-01-2100-replica.backup.org.conf

Now we need to reload the Bareos server configuration.

root@replica:~ # echo reload | bconsole
Connecting to Director localhost:9101
1000 OK: replica.backup.org Version: 16.2.7 (09 October 2017)
Enter a period to cancel a command.
reload
reloaded

Let's see how it looks in the browser. We will run that job, then cancel it, and then run it again.

bareos-webui-03-clients

Client replica.backup.org is configured.

Let's go to the Jobs tab to start its backup Job.

bareos-webui-04-jobs

Message that backup Job has started.

bareos-webui-05

We can see it in the running state on the Jobs tab.

bareos-webui-06

… and on the Dashboard.

bareos-webui-07

We can also display its messages by clicking on its number.

bareos-webui-08

The Jobs tab after cancelling the first Job and starting it again till completion.

bareos-webui-09

… and the Dashboard after these activities.

bareos-webui-10-dashboard

Restore

Let's restore some data – in Bareos it's a breeze as it's done directly in the browser on the Restore tab.

bareos-webui-11-restore

The Restore Job has started.

bareos-webui-12

The Dashboard after restoration.

bareos-webui-13-dashboard

… and Volumes with our precious data.

bareos-webui-14-volumes

Contents of a Volume.

bareos-webui-15-volumes-backups

Status of our Bareos Director.

bareos-webui-16

… and Director Messages – the equivalent of query actlog from IBM TSM or, as they call it recently, IBM Spectrum Protect.

bareos-webui-17-messages

… and Bareos Console (bconsole) directly in the browser. Masterpiece!

bareos-webui-18-console

Confirmation about the restored file.

root@replica:~ # ls -l /tmp/bareos-restores/COPYRIGHT 
-r--r--r--  1 root  wheel  6199 Jul 21  2017 /tmp/bareos-restores/COPYRIGHT

root@replica:~ # sha256 /tmp/bareos-restores/COPYRIGHT /COPYRIGHT | column -t
SHA256  (/tmp/bareos-restores/COPYRIGHT)  =  79b7aaafa1bc42a1ff03f1f78a667edb9a203dbcadec06aabc875e25a83d23f0
SHA256  (/COPYRIGHT)                      =  79b7aaafa1bc42a1ff03f1f78a667edb9a203dbcadec06aabc875e25a83d23f0

Remote Replica

We have volumes with backups in the /bareos directory; we will now configure rsync(1) to replicate these backups to the /bareos-dr directory – the NFS server in the other location.

root@replica:~ # pkg install rsync

The rsync(1) command will look like this.

/usr/local/bin/rsync -r -u -l -p -t -S --force --no-whole-file --numeric-ids --delete-after /bareos/ /bareos-dr/
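
Before wiring it into cron(8) You may want a dry run first – the -n (--dry-run) flag makes rsync(1) only report what it would transfer:

/usr/local/bin/rsync -n -v -r -u -l -p -t -S --force --no-whole-file --numeric-ids --delete-after /bareos/ /bareos-dr/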

We will put that command into the crontab(1) root job.

root@replica:~ # crontab -e

root@replica:~ # crontab -l
0 7 * * * /usr/local/bin/rsync -r -u -l -p -t -S --force --no-whole-file --numeric-ids --delete-after /bareos/ /bareos-dr/

As all backups will have finished before 7:00 – the end of the backup window – we schedule the replication for that time.

Summary

So we have a configured Bareos Backup Server on the FreeBSD operating system, ready to make backups and restores. It can be used as an appliance on any virtualization platform, or on a physical server with local storage resources instead of NFS shares.

UPDATE 1 – Die Hard Tribute in 9.2-RC3 Loader

The FreeBSD Developers even made a tribute to the Die Hard movie and actually implemented the Nakatomi Socrates screen in the FreeBSD 9.2-RC3 loader, as shown in the images below. Unfortunately it was removed in the later FreeBSD 9.2-RC4 and the official FreeBSD 9.2-RELEASE versions.

freebsd-9.2-nakatomi-socrates-01

freebsd-9.2-nakatomi-socrates-02

UPDATE 2

The Bareos Backup Server on FreeBSD article was featured in the BSD Now 254 – Bare the OS episode.

Thanks for mentioning!

UPDATE 3 – Additional Permissions

Thanks to user Math, who identified the problem, I added the paragraph below in the proper place to make the HOWTO complete. Without it many Bareos daemons would not start, failing with a permissions error.

Here is the added paragraph.

We also need to change permissions for the /var/run and /var/db directories for Bareos.

root@replica:~ # chown -R bareos:bareos /var/db/bareos
root@replica:~ # chown -R bareos:bareos /var/run/bareos


EOF

Distributed Object Storage with Minio on FreeBSD

Meet Minio.

minio-logo-arch-32

Minio is a free and open source distributed object storage server compatible with the Amazon S3 v2/v4 API. It offers data protection against hardware failures using erasure code and bitrot detection, and supports a highly available distributed setup. It provides confidentiality, integrity and authenticity assurances for encrypted data with negligible performance overhead. Both server side and client side encryption are supported. Below is an image of an example Minio setup.

Web

Minio identifies itself as the ZFS of Cloud Object Storage. This guide will show You how to set up highly available distributed Minio storage on the FreeBSD operating system with ZFS as the backend for Minio data. For convenience we will use FreeBSD Jails operating system level virtualization.

Setup

The setup assumes that You have 3 datacenters – two datacenters in which most of the data must reside and a third datacenter used in a ‘quorum/witness’ role. Distributed Minio supports up to 16 nodes/drives total, so we may juggle with that number to balance data between the desired datacenters. As we have 16 drives to allocate across 3 sites we will use a 7 + 7 + 2 approach here. The datacenters where most of the data must reside have a 7/16 ratio each, while the ‘quorum/witness’ datacenter has only a 2/16 ratio. Thanks to built-in Minio redundancy we may lose (turn off, for example) any one of those machines and our object storage will still be available and ready to use for any purpose.

Jails

First we will create 3 Jails for our proof of concept Minio setup; storage1 will have the ‘quorum/witness’ role while storage2 and storage3 will have the ‘data’ role. To distinguish commands typed on the host system from those typed in a storageX Jail I use two different prompts – this way it should be obvious which command to execute and where.

Command on the host system.

host # command

Command on the storageX Jail.

root@storageX:/ # command

First we will create the base Jails for our setup.

host # mkdir -p /jail/BASE /jail/storage1 /jail/storage2 /jail/storage3
host # cd /jail/BASE
host # fetch http://ftp.freebsd.org/pub/FreeBSD/releases/amd64/11.1-RELEASE/base.txz
host # for I in 1 2 3; do echo ${I}; tar --unlink -xpJf /jail/BASE/base.txz -C /jail/storage${I}; done
1
2
3
host #

We will now add the Jails configuration to the /etc/jail.conf file.

I have used my laptop as the Jail host. This is why the Jails will be configured to use the wireless wlan0 interface and 192.168.43.10X addresses.

host # for I in 1 2 3
do
  cat >> /etc/jail.conf << __EOF
storage${I} {
  host.hostname = storage${I}.local;
  ip4.addr = 192.168.43.10${I};
  interface = wlan0;
  path = /jail/storage${I};
  exec.start = "/bin/sh /etc/rc";
  exec.stop = "/bin/sh /etc/rc.shutdown";
  exec.clean;
  mount.devfs;
  allow.raw_sockets;
}

__EOF
done
host #

Let's verify that the /etc/jail.conf file is configured as desired.

host # cat /etc/jail.conf
storage1 {
  host.hostname = storage1.local;
  ip4.addr = 192.168.43.101;
  interface = wlan0;
  path = /jail/storage1;
  exec.start = "/bin/sh /etc/rc";
  exec.stop = "/bin/sh /etc/rc.shutdown";
  exec.clean;
  mount.devfs;
  allow.raw_sockets;
}

storage2 {
  host.hostname = storage2.local;
  ip4.addr = 192.168.43.102;
  interface = wlan0;
  path = /jail/storage2;
  exec.start = "/bin/sh /etc/rc";
  exec.stop = "/bin/sh /etc/rc.shutdown";
  exec.clean;
  mount.devfs;
  allow.raw_sockets;
}

storage3 {
  host.hostname = storage3.local;
  ip4.addr = 192.168.43.103;
  interface = wlan0;
  path = /jail/storage3;
  exec.start = "/bin/sh /etc/rc";
  exec.stop = "/bin/sh /etc/rc.shutdown";
  exec.clean;
  mount.devfs;
  allow.raw_sockets;
}

host #

Now we will start our Jails.

host # for I in 1 2 3; do service jail onestart storage${I}; done
Starting jails: storage1.
Starting jails: storage2.
Starting jails: storage3.

Let's see how they work.

host # jls
   JID  IP Address      Hostname                      Path
     1  192.168.43.101  storage1.local                /jail/storage1
     2  192.168.43.102  storage2.local                /jail/storage2
     3  192.168.43.103  storage3.local                /jail/storage3

Now let's add a DNS server so they will have Internet connectivity.

host # for I in 1 2 3; do echo nameserver 1.1.1.1 > /jail/storage${I}/etc/resolv.conf; done

We can now install the Minio package.

host # for I in 1 2 3; do jexec storage${I} env ASSUME_ALWAYS_YES=yes pkg install -y minio; echo; done
Bootstrapping pkg from pkg+http://pkg.FreeBSD.org/FreeBSD:11:amd64/quarterly, please wait...
Verifying signature with trusted certificate pkg.freebsd.org.2013102301... done
[storage1.local] Installing pkg-1.10.5...
[storage1.local] Extracting pkg-1.10.5: 100%
Updating FreeBSD repository catalogue...
pkg: Repository FreeBSD load error: access repo file(/var/db/pkg/repo-FreeBSD.sqlite) failed: No such file or directory
[storage1.local] Fetching meta.txz: 100%    944 B   0.9kB/s    00:01    
[storage1.local] Fetching packagesite.txz: 100%    6 MiB 637.1kB/s    00:10    
Processing entries: 100%
FreeBSD repository update completed. 31143 packages processed.
All repositories are up to date.
Updating database digests format: 100%
The following 1 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
        minio: 2018.03.19.19.22.06

Number of packages to be installed: 1

The process will require 22 MiB more space.
6 MiB to be downloaded.
[storage1.local] [1/1] Fetching minio-2018.03.19.19.22.06.txz: 100%    6 MiB 305.6kB/s    00:19    
Checking integrity... done (0 conflicting)
[storage1.local] [1/1] Installing minio-2018.03.19.19.22.06...
===> Creating groups.
Creating group 'minio' with gid '473'.
===> Creating users
Creating user 'minio' with uid '473'.
[storage1.local] [1/1] Extracting minio-2018.03.19.19.22.06: 100%

Bootstrapping pkg from pkg+http://pkg.FreeBSD.org/FreeBSD:11:amd64/quarterly, please wait...
Verifying signature with trusted certificate pkg.freebsd.org.2013102301... done
[storage2.local] Installing pkg-1.10.5...
[storage2.local] Extracting pkg-1.10.5: 100%
Updating FreeBSD repository catalogue...
pkg: Repository FreeBSD load error: access repo file(/var/db/pkg/repo-FreeBSD.sqlite) failed: No such file or directory
[storage2.local] Fetching meta.txz: 100%    944 B   0.9kB/s    00:01    
[storage2.local] Fetching packagesite.txz: 100%    6 MiB 637.1kB/s    00:10    
Processing entries: 100%
FreeBSD repository update completed. 31143 packages processed.
All repositories are up to date.
Updating database digests format: 100%
The following 1 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
        minio: 2018.03.19.19.22.06

Number of packages to be installed: 1

The process will require 22 MiB more space.
6 MiB to be downloaded.
[storage2.local] [1/1] Fetching minio-2018.03.19.19.22.06.txz: 100%    6 MiB 305.6kB/s    00:19    
Checking integrity... done (0 conflicting)
[storage2.local] [1/1] Installing minio-2018.03.19.19.22.06...
===> Creating groups.
Creating group 'minio' with gid '473'.
===> Creating users
Creating user 'minio' with uid '473'.
[storage2.local] [1/1] Extracting minio-2018.03.19.19.22.06: 100%

Bootstrapping pkg from pkg+http://pkg.FreeBSD.org/FreeBSD:11:amd64/quarterly, please wait...
Verifying signature with trusted certificate pkg.freebsd.org.2013102301... done
[storage3.local] Installing pkg-1.10.5...
[storage3.local] Extracting pkg-1.10.5: 100%
Updating FreeBSD repository catalogue...
pkg: Repository FreeBSD load error: access repo file(/var/db/pkg/repo-FreeBSD.sqlite) failed: No such file or directory
[storage3.local] Fetching meta.txz: 100%    944 B   0.9kB/s    00:01    
[storage3.local] Fetching packagesite.txz: 100%    6 MiB 637.1kB/s    00:10    
Processing entries: 100%
FreeBSD repository update completed. 31143 packages processed.
All repositories are up to date.
Updating database digests format: 100%
The following 1 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
        minio: 2018.03.19.19.22.06

Number of packages to be installed: 1

The process will require 22 MiB more space.
6 MiB to be downloaded.
[storage3.local] [1/1] Fetching minio-2018.03.19.19.22.06.txz: 100%    6 MiB 305.6kB/s    00:19    
Checking integrity... done (0 conflicting)
[storage3.local] [1/1] Installing minio-2018.03.19.19.22.06...
===> Creating groups.
Creating group 'minio' with gid '473'.
===> Creating users
Creating user 'minio' with uid '473'.
[storage3.local] [1/1] Extracting minio-2018.03.19.19.22.06: 100%

host #

Let's verify that the Minio package installed successfully.

host # for I in 1 2 3; do jexec storage${I} which minio; done
/usr/local/bin/minio
/usr/local/bin/minio
/usr/local/bin/minio
host #

Now we will configure the /etc/hosts file on each Jail.

root@storage1:/ # cat >> /etc/hosts << __EOF
192.168.43.101 storage1
192.168.43.102 storage2
192.168.43.103 storage3
__EOF
root@storage2:/ # cat >> /etc/hosts << __EOF
192.168.43.101 storage1
192.168.43.102 storage2
192.168.43.103 storage3
__EOF
root@storage3:/ # cat >> /etc/hosts << __EOF
192.168.43.101 storage1
192.168.43.102 storage2
192.168.43.103 storage3
__EOF
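
Since the Jail filesystems are visible from the host, we could equally well populate all three files with a single loop from the host, similar to the resolv.conf loop above.

host # for I in 1 2 3
do
  cat >> /jail/storage${I}/etc/hosts << __EOF
192.168.43.101 storage1
192.168.43.102 storage2
192.168.43.103 storage3
__EOF
done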

We will create the directories for the Minio data.

host # for DIR in 1 2 3 4 5 6 7
do
  for I in 2 3
  do
    jexec storage${I} mkdir -p /data${DIR}
  done
done
host # for DIR in 1 2
do
  for I in 1
  do
    jexec storage${I} mkdir -p /data${DIR}
  done
done

Let's verify that our data directories were created successfully.

host # for I in 1 2 3
  do
    echo storage${I}
    jexec storage${I} ls -1 / | grep data
    echo
  done


storage1
data1
data2

storage2
data1
data2
data3
data4
data5
data6
data7

storage3
data1
data2
data3
data4
data5
data6
data7

Basic minio command example.

root@storage1:/ # minio
NAME:
  minio - Cloud Storage Server.

DESCRIPTION:
  Minio is an Amazon S3 compatible object storage server. Use it to store photos, videos, VMs, containers, log files, or any blob of data as objects.

USAGE:
  minio [FLAGS] COMMAND [ARGS...]

COMMANDS:
  server   Start object storage server.
  gateway  Start object storage gateway.
  update   Check for a new software update.
  version  Print version.
  
FLAGS:
  --config-dir value, -C value  Path to configuration directory. (default: "/root/.minio")
  --quiet                       Disable startup information.
  --json                        Output server logs and startup information in json format.
  --help, -h                    Show help.
  
VERSION:
  2018-03-19T19:22:06Z

Now we can generate the list of directories on the servers to pass as arguments to Minio.

host # for DIR in 1 2
do
  for I in 1 
  do
    echo -n http://
    jls | grep storage${I} | awk '{printf $3}' | sed s/.local//g
    echo ":9000/data${DIR} \\"
  done
done | sort -n

host # for DIR in 1 2 3 4 5 6 7
do
  for I in 2 3
  do
    echo -n http://
    jls | grep storage${I} | awk '{printf $3}' | sed s/.local//g
    echo ":9000/data${DIR} \\"
  done
done | sort -n
http://storage1:9000/data1 \
http://storage1:9000/data2 \
http://storage2:9000/data1 \
http://storage2:9000/data2 \
http://storage2:9000/data3 \
http://storage2:9000/data4 \
http://storage2:9000/data5 \
http://storage2:9000/data6 \
http://storage2:9000/data7 \
http://storage3:9000/data1 \
http://storage3:9000/data2 \
http://storage3:9000/data3 \
http://storage3:9000/data4 \
http://storage3:9000/data5 \
http://storage3:9000/data6 \
http://storage3:9000/data7 \

We can of course also just write it down by hand 🙂

host # for DIR in 1 2
do
  for I in 1 
  do
    echo -n http://
    jls | grep storage${I} | awk '{printf $3}' | sed s/.local//g
    echo -n ":9000/data${DIR} "
  done
done | sort -n

host # for DIR in 1 2 3 4 5 6 7
do
  for I in 2 3
  do
    echo -n http://
    jls | grep storage${I} | awk '{printf $3}' | sed s/.local//g
    echo -n ":9000/data${DIR} "
  done
done | sort -n

This is our list of data directories that we will use to configure Minio in FreeBSD's main configuration /etc/rc.conf file.

http://storage1:9000/data1 http://storage1:9000/data2 http://storage2:9000/data1 http://storage2:9000/data2 http://storage2:9000/data3 http://storage2:9000/data4 http://storage2:9000/data5 http://storage2:9000/data6 http://storage2:9000/data7 http://storage3:9000/data1 http://storage3:9000/data2 http://storage3:9000/data3 http://storage3:9000/data4 http://storage3:9000/data5 http://storage3:9000/data6 http://storage3:9000/data7

Now, let's put the Minio settings into the /etc/rc.conf file.

root@storageX:~ # cat > /etc/rc.conf << __EOF 
minio_enable=YES
minio_disks="http://storage1:9000/data1 http://storage1:9000/data2 http://storage2:9000/data1 http://storage2:9000/data2 http://storage2:9000/data3 http://storage2:9000/data4 http://storage2:9000/data5 http://storage2:9000/data6 http://storage2:9000/data7 http://storage3:9000/data1 http://storage3:9000/data2 http://storage3:9000/data3 http://storage3:9000/data4 http://storage3:9000/data5 http://storage3:9000/data6 http://storage3:9000/data7"
__EOF
root@storageX:~ # 
root@storageX:~ # cat /etc/rc.conf
minio_enable=YES
minio_disks="http://storage1:9000/data1 http://storage1:9000/data2 http://storage2:9000/data1 http://storage2:9000/data2 http://storage2:9000/data3 http://storage2:9000/data4 http://storage2:9000/data5 http://storage2:9000/data6 http://storage2:9000/data7 http://storage3:9000/data1 http://storage3:9000/data2 http://storage3:9000/data3 http://storage3:9000/data4 http://storage3:9000/data5 http://storage3:9000/data6 http://storage3:9000/data7"
root@storageX:~ #
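
Note that the single > redirect above replaces the whole /etc/rc.conf file, which is fine for these fresh Jails. On a system with existing settings it would be safer to append with >> or to use sysrc(8) from the base system, for example:

root@storageX:~ # sysrc minio_enable=YES
root@storageX:~ # sysrc minio_disks="http://storage1:9000/data1 http://storage1:9000/data2 http://storage2:9000/data1 http://storage2:9000/data2 http://storage2:9000/data3 http://storage2:9000/data4 http://storage2:9000/data5 http://storage2:9000/data6 http://storage2:9000/data7 http://storage3:9000/data1 http://storage3:9000/data2 http://storage3:9000/data3 http://storage3:9000/data4 http://storage3:9000/data5 http://storage3:9000/data6 http://storage3:9000/data7"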

Now we will start and configure Minio for the first time.

On each storageX server run the following set of commands.

host # jexec storage3
root@storage3:~ # 
root@storage3:/ # rm -rf /http:\*
root@storage3:/ # rm -rf /usr/local/etc/minio
root@storage3:/ # rm -rf /data?/* /data?/.minio.sys
root@storage3:/ # touch                /var/log/minio.log
root@storage3:/ # chown    minio:minio /var/log/minio.log
root@storage3:/ # mkdir -p             /usr/local/etc/minio
root@storage3:/ # chown -R minio:minio /usr/local/etc/minio
root@storage3:/ # mkdir -p             /http::
root@storage3:/ # chown -R minio:minio /http::
root@storage3:/ # mkdir -p             /http:
root@storage3:/ # chown -R minio:minio /http:
root@storage3:/ # su -m minio -c 'env \\
?   MINIO_ACCESS_KEY=alibaba \\
?   MINIO_SECRET_KEY=0P3NS3S4M3 \\
?   minio server \\
?     --config-dir /usr/local/etc/minio \\
?     http://storage1:9000/data1 \\
?     http://storage1:9000/data2 \\
?     http://storage2:9000/data1 \\
?     http://storage2:9000/data2 \\
?     http://storage2:9000/data3 \\
?     http://storage2:9000/data4 \\
?     http://storage2:9000/data5 \\
?     http://storage2:9000/data6 \\
?     http://storage2:9000/data7 \\
?     http://storage3:9000/data1 \\
?     http://storage3:9000/data2 \\
?     http://storage3:9000/data3 \\
?     http://storage3:9000/data4 \\
?     http://storage3:9000/data5 \\
?     http://storage3:9000/data6 \\
?     http://storage3:9000/data7'
Created minio configuration file successfully at /usr/local/etc/minio
Waiting for the first server to format the disks.
Waiting for the first server to format the disks.
Drive Capacity: 504 GiB Free, 515 GiB Total
Status:         16 Online, 0 Offline. 

Endpoint:  http://192.168.43.103:9000
AccessKey: alibaba 
SecretKey: 0P3NS3S4M3 

Browser Access:
   http://192.168.43.103:9000

Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
   $ mc config host add myminio http://192.168.43.103:9000 alibaba 0P3NS3S4M3

Object API (Amazon S3 compatible):
   Go:         https://docs.minio.io/docs/golang-client-quickstart-guide
   Java:       https://docs.minio.io/docs/java-client-quickstart-guide
   Python:     https://docs.minio.io/docs/python-client-quickstart-guide
   JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide
   .NET:       https://docs.minio.io/docs/dotnet-client-quickstart-guide
host # jexec storage2
root@storage2:~ # 
root@storage2:/ # rm -rf /http:\*
root@storage2:/ # rm -rf /usr/local/etc/minio
root@storage2:/ # rm -rf /data?/* /data?/.minio.sys
root@storage2:/ # touch                /var/log/minio.log
root@storage2:/ # chown    minio:minio /var/log/minio.log
root@storage2:/ # mkdir -p             /usr/local/etc/minio
root@storage2:/ # chown -R minio:minio /usr/local/etc/minio
root@storage2:/ # mkdir -p             /http::
root@storage2:/ # chown -R minio:minio /http::
root@storage2:/ # mkdir -p             /http:
root@storage2:/ # chown -R minio:minio /http:
root@storage2:/ # su -m minio -c 'env \\
?   MINIO_ACCESS_KEY=alibaba \\
?   MINIO_SECRET_KEY=0P3NS3S4M3 \\
?   minio server \\
?     --config-dir /usr/local/etc/minio \\
?     http://storage1:9000/data1 \\
?     http://storage1:9000/data2 \\
?     http://storage2:9000/data1 \\
?     http://storage2:9000/data2 \\
?     http://storage2:9000/data3 \\
?     http://storage2:9000/data4 \\
?     http://storage2:9000/data5 \\
?     http://storage2:9000/data6 \\
?     http://storage2:9000/data7 \\
?     http://storage3:9000/data1 \\
?     http://storage3:9000/data2 \\
?     http://storage3:9000/data3 \\
?     http://storage3:9000/data4 \\
?     http://storage3:9000/data5 \\
?     http://storage3:9000/data6 \\
?     http://storage3:9000/data7'
Created minio configuration file successfully at /usr/local/etc/minio
Waiting for the first server to format the disks.
Waiting for the first server to format the disks.
Drive Capacity: 504 GiB Free, 515 GiB Total
Status:         16 Online, 0 Offline. 

Endpoint:  http://192.168.43.102:9000
AccessKey: alibaba 
SecretKey: 0P3NS3S4M3 

Browser Access:
   http://192.168.43.102:9000

Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
   $ mc config host add myminio http://192.168.43.102:9000 alibaba 0P3NS3S4M3

Object API (Amazon S3 compatible):
   Go:         https://docs.minio.io/docs/golang-client-quickstart-guide
   Java:       https://docs.minio.io/docs/java-client-quickstart-guide
   Python:     https://docs.minio.io/docs/python-client-quickstart-guide
   JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide
   .NET:       https://docs.minio.io/docs/dotnet-client-quickstart-guide
host # jexec storage1
root@storage1:~ # 
root@storage1:/ # rm -rf /http:\*
root@storage1:/ # rm -rf /usr/local/etc/minio
root@storage1:/ # rm -rf /data?/* /data?/.minio.sys
root@storage1:/ # touch                /var/log/minio.log
root@storage1:/ # chown    minio:minio /var/log/minio.log
root@storage1:/ # mkdir -p             /usr/local/etc/minio
root@storage1:/ # chown -R minio:minio /usr/local/etc/minio
root@storage1:/ # mkdir -p             /http::
root@storage1:/ # chown -R minio:minio /http::
root@storage1:/ # mkdir -p             /http:
root@storage1:/ # chown -R minio:minio /http:
root@storage1:/ # su -m minio -c 'env \\
?   MINIO_ACCESS_KEY=alibaba \\
?   MINIO_SECRET_KEY=0P3NS3S4M3 \\
?   minio server \\
?     --config-dir /usr/local/etc/minio \\
?     http://storage1:9000/data1 \\
?     http://storage1:9000/data2 \\
?     http://storage2:9000/data1 \\
?     http://storage2:9000/data2 \\
?     http://storage2:9000/data3 \\
?     http://storage2:9000/data4 \\
?     http://storage2:9000/data5 \\
?     http://storage2:9000/data6 \\
?     http://storage2:9000/data7 \\
?     http://storage3:9000/data1 \\
?     http://storage3:9000/data2 \\
?     http://storage3:9000/data3 \\
?     http://storage3:9000/data4 \\
?     http://storage3:9000/data5 \\
?     http://storage3:9000/data6 \\
?     http://storage3:9000/data7'
Created minio configuration file successfully at /usr/local/etc/minio
Waiting for the first server to format the disks.
Waiting for the first server to format the disks.
Drive Capacity: 504 GiB Free, 515 GiB Total
Status:         16 Online, 0 Offline. 

Endpoint:  http://192.168.43.101:9000
AccessKey: alibaba 
SecretKey: 0P3NS3S4M3 

Browser Access:
   http://192.168.43.101:9000

Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
   $ mc config host add myminio http://192.168.43.101:9000 alibaba 0P3NS3S4M3

Object API (Amazon S3 compatible):
   Go:         https://docs.minio.io/docs/golang-client-quickstart-guide
   Java:       https://docs.minio.io/docs/java-client-quickstart-guide
   Python:     https://docs.minio.io/docs/python-client-quickstart-guide
   JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide
   .NET:       https://docs.minio.io/docs/dotnet-client-quickstart-guide

Here is how it looks in the xterm terminal.

minio-first-run-setup

We can now verify in the browser that it actually works.

minio-browser-01

Now hit [CTRL]+[C] in each of these windows to stop the Minio cluster.

We will now start Minio as a service with the FreeBSD rc(8) subsystem.

root@storage1:/ # service minio start
Starting minio.
root@storage1:/ # cat /var/log/minio.log 
root@storage1:/ # service minio status
minio is running as pid 50309.

Let's check if it works.

root@storage1:/ # ps -U minio
  PID TT  STAT    TIME COMMAND
50308  -  IsJ  0:00.00 daemon: /usr/bin/env[50309] (daemon)
50309  -  IJ   0:00.27 /usr/local/bin/minio -C /usr/local/etc/minio server (...)
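
The service has to run on every node, so assuming the same /etc/rc.conf landed on all three Jails, we can start and check them from the host in one go.

host # for I in 1 2 3; do jexec storage${I} service minio start; done
host # for I in 1 2 3; do jexec storage${I} service minio status; done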

Now we will do some basic operations: log in to the Minio distributed storage, create a new bucket and upload a file to it.

minio-browser-02

This is how an empty Minio cluster looks.

minio-browser-03

Select the Create Bucket option using the button shown below.

minio-browser-04-create-bucket

We will use the name test for our new bucket.

minio-browser-05-create-bucket

It is created and we can access it.

minio-browser-06-bucket

Let's use the Upload File option from the same menu as previously.

minio-browser-07-file-upload

The upload progress shown by Minio.

minio-browser-08-file-upload

The file has indeed been uploaded.

minio-browser-09-file-upload

By clicking on it we may access it directly from the browser.

minio-browser-10-file-display

We can also share a link to that file by using the File Menu as shown below.

minio-browser-10-file-link

The link creation dialog is shown below.

minio-browser-11-file-link

minio-browser-12-file-link

Let's see how Minio distributes the data (the ThinkPad Design - Spirit and Essence.pdf file in our case) over its data directories spread across the servers.

host # jexec storage1
root@storage1:/ # find /data?/test
/data1/test
/data1/test/ThinkPad Design - Spirit and Essence.pdf
/data1/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data1/test/ThinkPad Design - Spirit and Essence.pdf/part.1
/data2/test
/data2/test/ThinkPad Design - Spirit and Essence.pdf
/data2/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data2/test/ThinkPad Design - Spirit and Essence.pdf/part.1
root@storage1:/ # exit
host # jexec storage2
root@storage2:/ # find /data?/test
/data1/test
/data1/test/ThinkPad Design - Spirit and Essence.pdf
/data1/test/ThinkPad Design - Spirit and Essence.pdf/part.1
/data1/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data2/test
/data2/test/ThinkPad Design - Spirit and Essence.pdf
/data2/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data2/test/ThinkPad Design - Spirit and Essence.pdf/part.1
/data3/test
/data3/test/ThinkPad Design - Spirit and Essence.pdf
/data3/test/ThinkPad Design - Spirit and Essence.pdf/part.1
/data3/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data4/test
/data4/test/ThinkPad Design - Spirit and Essence.pdf
/data4/test/ThinkPad Design - Spirit and Essence.pdf/part.1
/data4/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data5/test
/data5/test/ThinkPad Design - Spirit and Essence.pdf
/data5/test/ThinkPad Design - Spirit and Essence.pdf/part.1
/data5/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data6/test
/data6/test/ThinkPad Design - Spirit and Essence.pdf
/data6/test/ThinkPad Design - Spirit and Essence.pdf/part.1
/data6/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data7/test
/data7/test/ThinkPad Design - Spirit and Essence.pdf
/data7/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data7/test/ThinkPad Design - Spirit and Essence.pdf/part.1
root@storage2:/ # exit
host # jexec storage3
root@storage3:/ # find /data?/test
/data1/test
/data1/test/ThinkPad Design - Spirit and Essence.pdf
/data1/test/ThinkPad Design - Spirit and Essence.pdf/part.1
/data1/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data2/test
/data2/test/ThinkPad Design - Spirit and Essence.pdf
/data2/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data2/test/ThinkPad Design - Spirit and Essence.pdf/part.1
/data3/test
/data3/test/ThinkPad Design - Spirit and Essence.pdf
/data3/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data3/test/ThinkPad Design - Spirit and Essence.pdf/part.1
/data4/test
/data4/test/ThinkPad Design - Spirit and Essence.pdf
/data4/test/ThinkPad Design - Spirit and Essence.pdf/part.1
/data4/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data5/test
/data5/test/ThinkPad Design - Spirit and Essence.pdf
/data5/test/ThinkPad Design - Spirit and Essence.pdf/part.1
/data5/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data6/test
/data6/test/ThinkPad Design - Spirit and Essence.pdf
/data6/test/ThinkPad Design - Spirit and Essence.pdf/part.1
/data6/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data7/test
/data7/test/ThinkPad Design - Spirit and Essence.pdf
/data7/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data7/test/ThinkPad Design - Spirit and Essence.pdf/part.1
root@storage3:/ # exit

We can also inspect the Minio configuration file /usr/local/etc/minio/config.json that has been generated.

host # jexec storage1
root@storage1:/ # cat /usr/local/etc/minio/config.json 
{
        "version": "22",
        "credential": {
                "accessKey": "alibaba",
                "secretKey": "0P3NS3S4M3"
        },
        "region": "",
        "browser": "on",
        "domain": "",
        "storageclass": {
                "standard": "",
                "rrs": ""
        },
        "notify": {
                "amqp": {
                        "1": {
                                "enable": false,
                                "url": "",
                                "exchange": "",
                                "routingKey": "",
                                "exchangeType": "",
                                "deliveryMode": 0,
                                "mandatory": false,
                                "immediate": false,
                                "durable": false,
                                "internal": false,
                                "noWait": false,
                                "autoDeleted": false
                        }
                },
                "elasticsearch": {
                        "1": {
                                "enable": false,
                                "format": "",
                                "url": "",
                                "index": ""
                        }
                },
                "kafka": {
                        "1": {
                                "enable": false,
                                "brokers": null,
                                "topic": ""
                        }
                },
                "mqtt": {
                        "1": {
                                "enable": false,
                                "broker": "",
                                "topic": "",
                                "qos": 0,
                                "clientId": "",
                                "username": "",
                                "password": "",
                                "reconnectInterval": 0,
                                "keepAliveInterval": 0
                        }
                },
                "mysql": {
                        "1": {
                                "enable": false,
                                "format": "",
                                "dsnString": "",
                                "table": "",
                                "host": "",
                                "port": "",
                                "user": "",
                                "password": "",
                                "database": ""
                        }
                },
                "nats": {
                        "1": {
                                "enable": false,
                                "address": "",
                                "subject": "",
                                "username": "",
                                "password": "",
                                "token": "",
                                "secure": false,
                                "pingInterval": 0,
                                "streaming": {
                                        "enable": false,
                                        "clusterID": "",
                                        "clientID": "",
                                        "async": false,
                                        "maxPubAcksInflight": 0
                                }
                        }
                },
                "postgresql": {
                        "1": {
                                "enable": false,
                                "format": "",
                                "connectionString": "",
                                "table": "",
                                "host": "",
                                "port": "",
                                "user": "",
                                "password": "",
                                "database": ""
                        }
                },
                "redis": {
                        "1": {
                                "enable": false,
                                "format": "",
                                "address": "",
                                "password": "",
                                "key": ""
                        }
                },
                "webhook": {
                        "1": {
                                "enable": false,
                                "endpoint": ""
                        }
                }
        }
}

S3FS

We can also mount that test bucket from our distributed Minio object storage cluster as a filesystem using the S3FS project. Let's add the s3fs package and mount our bucket.

host # pkg install -y fusefs-s3fs

Now we will configure the password for our bucket. The .passwd-s3fs file uses the bucket:accesskey:secretkey format.

host # echo test:alibaba:0P3NS3S4M3 > /root/.passwd-s3fs
host # chmod 600 /root/.passwd-s3fs
host # cat /root/.passwd-s3fs 
test:alibaba:0P3NS3S4M3

Now let's do the actual mount.

host # mkdir /tmp/test
host # s3fs \
  -o allow_other \
  -o use_path_request_style \
  -o url=http://192.168.43.101:9000 \
  -o passwd_file=/root/.passwd-s3fs \
  test /tmp/test

The ThinkPad Design - Spirit and Essence.pdf file that we put there through the web interface should be here.

host # exa -l /tmp/test
.--------- 10M root 2018-04-16 14:15 ThinkPad Design - Spirit and Essence.pdf

host # file /tmp/test/ThinkPad\ Design\ -\ Spirit\ and\ Essence.pdf 
/tmp/test/ThinkPad Design - Spirit and Essence.pdf: PDF document, version 1.4

host # stat /tmp/test/ThinkPad\ Design\ -\ Spirit\ and\ Essence.pdf
3976265496 2 ---------- 1 root wheel 0 10416953 "Jan  1 01:00:00 1970" "Apr 16 14:35:35 2018" "Jan  1 01:00:00 1970" "Jan  1 00:59:59 1970" 4096 20346 0 /tmp/test/ThinkPad Design - Spirit and Essence.pdf

We can now upload another file into that bucket using the s3fs mount.

host # cp -v /home/vermaden/On\ the\ Shortness\ of\ Life\ -\ Lucius\ Seneca.pdf /tmp/test
/home/vermaden/On the Shortness of Life - Lucius Seneca.pdf -> /tmp/test/On the Shortness of Life - Lucius Seneca.pdf

host # file /tmp/test/On\ the\ Shortness\ of\ Life\ -\ Lucius\ Seneca.pdf 
On the Shortness of Life - Lucius Seneca.pdf: PDF document, version 1.4

We can also verify that the file we put there through s3fs is visible in the web interface.

minio-browser-13-s3fs-upload
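
When we are done, the s3fs mount can be detached like any other filesystem.

host # umount /tmp/test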

Real Hardware

Now, as we have a working proof of concept for the distributed Minio setup, how about putting it on real hardware for real storage purposes? I would set up a 16-node distributed Minio cluster on Supermicro SSG-5018D8-AR12L hardware. Supermicro even suggests using that kind of server for object storage; here is their white paper on the topic, Object Storage Solution for Data Archive using Supermicro SSG-5018D8-AR12L and OpenIO SDS, although they use OpenIO instead of Minio as the distributed object storage solution.

This server features the Supermicro X10SDV-7TP4F motherboard. This is important, as this motherboard officially supports the FreeBSD 11.x operating system according to the Supermicro OS Compatibility page.

The motherboard specification includes these features.

 1 x Intel Xeon D-1537 8-Core / 16-Threads TDP 35W
 4 x UDIMM for up to 128GB ECC RDIMM DDR4 2133MHz
12 x 3.5" SAS2/SATA3 Hot-Swap HDD Bays
 4 x 2.5" Cold-Swap HDD Bays
 1 x Controller Intel SoC for 4 SATA3 (6Gbps) Ports
 1 x Controller Broadcom 2116 for 16 SATA3 (6Gbps) Ports
 1 x Expansion Slot PCI-E 3.0 x8 
 1 x Expansion Slot M.2 PCIe 3.0 x4
 1 x Expansion Slot Mini-PCIe w/ mSATA Support
 2 x 10G SFP+ Port
 2 x 1GbE LAN Port
 2 x External USB 3.0 Port
 1 x Internal USB 2.0 Port
 2 x 400W High-Efficiency Redundant Power Supplies

You can configure your own and get an approximate price using the Thinkmate site here:
https://www.thinkmate.com/system/superstorage-server-5018d8-ar12l

I would add these components to the basic setup:

 4 x UDIMM FULL 128 GB ECC RDIMM DDR4
 2 x 240GB Micron 5100 MAX 2.5" SATA 6.0Gb/s SSD
 2 x 7.68TB Micron 5200 ECO Series 2.5" SATA 6.0Gb/s SSD
 12 x 12TB SATA 6.0Gb/s 7200RPM 3.5" Hitachi Ultrastar™ He12
 3 x SanDisk Cruzer Fit 32GB USB 3.0

Now, I would use the 3 x SanDisk Cruzer Fit 32GB USB 3.0 disks to install FreeBSD on a ZFS root/boot pool set up as a mirror plus a spare. We do not need performance here.

Then, the 12 x 12TB SATA 6.0Gb/s 7200RPM 3.5″ Hitachi Ultrastar™ He12 drives will be used as RAIDZ (the RAID5 equivalent in ZFS, without the write hole) for the Minio data, in an 11 + 1 setup, which means 11 drives for data and 1 drive for parity. As we can lose HALF of the Minio servers, I would not waste a 12 TB drive on a spare here. Then, I would use the 2 x 240GB Micron 5100 MAX 2.5″ SATA 6.0Gb/s SSD in a mirror for the ZFS ZIL (ZFS Intent Log) to accelerate writes and the 2 x 7.68TB Micron 5200 ECO Series 2.5″ SATA 6.0Gb/s SSD for the ZFS read cache (L2ARC).
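
A minimal sketch of that pool layout is shown below; the pool name and the device names (da0 to da11 for the Hitachi drives, ada0/ada1 for the Micron MAX SSDs, ada2/ada3 for the Micron ECO SSDs) are assumptions and will differ on real hardware.

host # zpool create minio \
        raidz da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11 \
        log mirror ada0 ada1 \
        cache ada2 ada3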

The network would be set up on the 2 x 10G SFP+ ports with LACP as the lagg0 interface, so each server would have 20 Gbit connectivity. This gives us a total of 320 Gbit of theoretical network throughput.
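
For reference, a sketch of such a lagg(4) LACP setup in /etc/rc.conf; the ix0/ix1 interface names depend on the actual NIC driver and the address is just an example.

ifconfig_ix0="up"
ifconfig_ix1="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport ix0 laggport ix1 10.0.0.101/24"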

This setup would give us 132 TB of ZFS pool space, with 15 TB for the read cache and 240 GB for writes, on a single 1U server. Doing the math, this gives us 2112 TB (more than 2 PB) of space for the Minio data.

With the Minio algorithm for data redundancy we will have about 1 PB of usable storage space in our 16U Object Storage FreeBSD Appliance.
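
The back-of-the-envelope numbers behind that; the halving in the last line assumes Minio's default erasure coding, which dedicates half of the drives to parity.

 11 x 12 TB (RAIDZ 11 + 1)  =  132 TB of pool space per server
 16 x 132 TB                = 2112 TB of raw Minio space in total
 2112 TB / 2 (erasure code) ~ 1056 TB ~ about 1 PB usable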

Not bad for my taste 🙂

UPDATE 1

The Distributed Object Storage with Minio on FreeBSD article was included in the BSD Now 246 – Disclosure episode.

Thanks for mentioning!

EOF

Nextcloud 13 on FreeBSD

Today I would like to share a setup of Nextcloud 13 running on a FreeBSD system. To make things more interesting, it will be running inside a FreeBSD Jail. I will not describe the Nextcloud setup itself here, as it is large enough for several blog posts.

The official Nextcloud 13 documentation recommends the following setup:

  • MySQL/MariaDB
  • PHP 7.0 (or newer)
  • Apache 2.4 (with mod_php)

I prefer the PostgreSQL database to MySQL/MariaDB and I prefer the fast and lean Nginx web server to Apache, so my setup is based on these components:

  • PostgreSQL 10.3
  • PHP 7.2.4
  • Nginx 1.12.2 (with php-fpm)
  • Memcached 1.5.7

The Memcached subsystem is the least important; it can easily be swapped for something more modern like Redis. I prefer not to use any third party tools for FreeBSD Jails management. Not because they are bad or anything like that; there are just many good choices for FreeBSD Jails management and I want to provide a GENERIC example for Nextcloud 13 in a Jail, not one for a specific management tool.

Host

Let's start by preparing the FreeBSD host with the needed settings. We need to allow the use of raw sockets in Jails. For future optional upgrades of the Jail we will also allow the use of chflags(1) in Jails.

host # cat >> /etc/sysctl.conf << __EOF

# ALLOW JAIL RAW SOCKETS
security.jail.allow_raw_sockets=1

# ALLOW UPGRADES IN JAIL 
security.jail.chflags_allowed=1
__EOF

host # sysctl security.jail.allow_raw_sockets=1
security.jail.allow_raw_sockets: 0 -> 1

host # sysctl security.jail.chflags_allowed=1
security.jail.chflags_allowed: 0 -> 1

I would also enable rctl(8) limits for convenient resource limitations on the host system.

host # cat >> /boot/loader.conf << __EOF

# RACCT/RCTL RESOURCE LIMITS
kern.racct.enable=1
__EOF
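
After a reboot (racct(8) accounting is only activated at boot) we could then put limits on the Jail, for example capping its memory; the 4 GB value below is just an illustration.

host # rctl -a jail:nextcloud:memoryuse:deny=4g
host # rctl -u jail:nextcloud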

The complete Jail after a finished installation takes less than 800 MB if you remove the parts not needed once the installation is done. With the complete FreeBSD Ports tree and current portsnap(8) information it takes about 1.6 GB of space.

  MB PATH                            DESC
1670 /jail/nextcloud                 (complete Nextcloud 13 Jail)
 726 /jail/nextcloud/usr/ports       (can be removed after install)
 178 /jail/nextcloud/var/db/portsnap (can be removed after install)

I have used my laptop for the Jail host. This is why the Jail will be configured to use the wireless wlan0 interface and the 192.168.43.100 address.

To distinguish the commands I type on the host system from those typed in the nextcloud.local Jail I use two different prompts; this way it should be obvious which command to execute and where.

Command on the host system.

host # command

Command on the nextcloud.local Jail.

root@nextcloud:/ # command

Here is the running Jail and its processes.

host # jls
   JID  IP Address      Hostname                      Path
    10  192.168.43.100  nextcloud.local               /jail/nextcloud
host # ps axwww -o %cpu,rss,time,command -J nextcloud
%CPU   RSS    TIME COMMAND
 0.0  2032 0:00.01 /usr/sbin/syslogd -s -s
 0.0  5504 0:00.00 /usr/sbin/sshd
 0.0  2056 0:00.01 /usr/sbin/cron -s
 0.0 24196 0:00.04 postgres: checkpointer process    (postgres)
 0.0 23040 0:00.04 postgres: writer process    (postgres)
 0.0 23036 0:00.07 postgres: wal writer process    (postgres)
 0.0 23328 0:00.06 postgres: autovacuum launcher process    (postgres)
 0.0 12764 0:00.24 postgres: stats collector process    (postgres)
 0.0 23204 0:00.00 postgres: bgworker: logical replication launcher    (postgres)
 0.0 23036 0:00.23 /usr/local/bin/postgres -D /var/db/postgres/data
 0.0  6072 0:00.00 nginx: master process /usr/local/sbin/nginx
 0.0  6548 0:00.00 nginx: worker process (nginx)
 0.0  7604 0:00.15 nginx: worker process (nginx)
 0.0  6548 0:00.00 nginx: worker process (nginx)
 0.0  6544 0:00.00 nginx: worker process (nginx)
 0.0 17600 0:01.25 /usr/local/bin/memcached -l 192.168.43.100 -d -P /var/run/memcached/memcached.pid
 0.0 31372 0:00.01 php-fpm: master process (/usr/local/etc/php-fpm.conf) (php-fpm)
 0.0 31388 0:00.00 php-fpm: pool www (php-fpm)
 0.0 31388 0:00.00 php-fpm: pool www (php-fpm)
 0.0 31388 0:00.00 php-fpm: pool www (php-fpm)
 0.0 31388 0:00.00 php-fpm: pool www (php-fpm)

Jail

First we will prepare the Jail for our Nextcloud 13 installation. Let's create some ZFS datasets for that purpose. I will use my local ZFS pool. I will also create the ZFS dataset for the PostgreSQL data with an 8k record size.

host # zfs create -o mountpoint=/jail                                                 local/jail
host # zfs create -o mountpoint=/jail/nextcloud                                       local/jail/nextcloud
host # zfs create -o mountpoint=/jail/nextcloud/var/db/postgres/data -o recordsize=8k local/jail/nextcloud/pgsql
host # zfs get -r recordsize local/jail
NAME                        PROPERTY    VALUE    SOURCE
local/jail                  recordsize  128K     default
local/jail/nextcloud        recordsize  128K     default
local/jail/nextcloud/pgsql  recordsize  8K       local

Now let's fetch the FreeBSD base into the /jail/nextcloud path.

host # cd /jail/nextcloud
host # fetch -o - http://ftp.freebsd.org/pub/FreeBSD/releases/amd64/11.1-RELEASE/base.txz | tar --unlink -xpJf - -C /jail/nextcloud
-                                             100% of   99 MB  689 kBps 02m28s
host # ls /jail/nextcloud
.cshrc     bin/       COPYRIGHT  etc/       libexec/   mnt/       proc/      root/      sys        usr/
.profile   boot/      dev/       lib/       media/     net/       rescue/    sbin/      tmp/       var/

We have the base FreeBSD Jail fetched into the /jail/nextcloud path; let's configure the host for that Jail.

We will only have one Jail configured (for simplicity), the nextcloud.local Jail.

host # cat >> /etc/jail.conf << __EOF
nextcloud {
  host.hostname = nextcloud.local;
  ip4.addr = 192.168.43.100;
  interface = wlan0;
  path = /jail/nextcloud;
  exec.start = "/bin/sh /etc/rc";
  exec.stop = "/bin/sh /etc/rc.shutdown";
  exec.clean;
  mount.devfs;
  allow.raw_sockets;
  allow.sysvipc;
}
__EOF

After creating/modifying the /etc/jail.conf file there should not be any running Jails.

host # jls
   JID  IP Address      Hostname                      Path

Let's enable Jails on the host system.

host # cat >> /etc/rc.conf << __EOF
# JAILS
  jail_enable=YES
__EOF

We can now start our nextcloud.local Jail for the first time.

host # service jail start nextcloud
Starting jails: nextcloud.
host # jls
   JID  IP Address      Hostname                      Path
     1  192.168.43.100  nextcloud.local               /jail/nextcloud

Now let's configure the nextcloud.local name on both the host and the Jail.

host # cat >> /etc/hosts << __EOF

# NEXTCLOUD
192.168.43.100 nextcloud.local nextcloud
__EOF
host # jexec 1 /bin/csh

root@nextcloud:/ # cat >> /etc/hosts << __EOF

# NEXTCLOUD
192.168.43.100 nextcloud.local nextcloud
__EOF

One has to remember that there is no localhost (127.0.0.1) in a FreeBSD Jail. The Jail only has its own configured IP address to listen on (192.168.43.100 in our example). This is important because if you configure services on the host that listen on localhost (127.0.0.1) they will work as usual, but when you do the same in a FreeBSD Jail you will not be able to connect to them (even from that very Jail).

Let's do some basic configuration of the Nextcloud Jail.

host # jexec 1 /bin/csh

root@nextcloud:/ # newaliases -v
WARNING: local host name (nextcloud) is not qualified; see cf/README: WHO AM I?
/etc/mail/aliases: 29 aliases, longest 10 bytes, 297 bytes total

root@nextcloud:/ # cp /usr/share/zoneinfo/Europe/Warsaw /etc/localtime

Let's create a basic /etc/rc.conf file for our Jail. I will leave some services commented out as they are not yet configured to run; we do not want to imitate Debian here and start services with default configs 🙂

root@nextcloud:/ # cat >> /etc/rc.conf << __EOF
# DAEMONS | yes
  syslogd_flags="-s -s"
  sshd_enable=YES
# php_fpm_enable=YES
# postgresql_enable=YES
# postgresql_class=postgres
# postgresql_data=/var/db/postgres/data
# memcached_enable=YES
# memcached_flags="-l 192.168.43.100"
# nginx_enable=YES

# DAEMONS | no
  sendmail_enable=NONE
  sendmail_submit_enable=NO
  sendmail_outbound_enable=NO
  sendmail_msp_queue_enable=NO

# OTHER
  clear_tmp_enable=YES
  clear_tmp_X=YES
  extra_netfs_types=NFS
  dumpdev=NO
  update_motd=NO
  keyrate=fast
__EOF

As we will disable sendmail(8) we need to make sure that the /var/spool/clientmqueue directory will not fill up over time. Let's configure a simple cron job for that.

root@nextcloud:/ # cat > /etc/cron.d/sendmail-clean-clientmqueue << __EOF
# CLEAN SENDMAIL
0 * * * * root /bin/rm -r -f /var/spool/clientmqueue/*
__EOF

As we have some basic configuration in place, let's restart our Jail.

root@nextcloud:/ # exit

host # service jail restart nextcloud
Stopping jails: nextcloud.
Starting jails: nextcloud.

host # jls
   JID  IP Address      Hostname                      Path
     2  192.168.43.100  nextcloud.local               /jail/nextcloud

host # jexec nextcloud /bin/csh

After the restart we only have the sshd(8) daemon listening for connections.

root@nextcloud:/ # sockstat -l4
USER     COMMAND    PID   FD PROTO  LOCAL ADDRESS         FOREIGN ADDRESS
root     sshd       97823 3  tcp4   192.168.43.100:22         *:*

Packages

Let's configure network connectivity in the Jail, as it will be needed to get the packages from the Internet.

root@nextcloud:/ # echo nameserver 1.1.1.1 > /etc/resolv.conf

root@nextcloud:/ # ping -c 3 -t 3 freebsd.org
PING freebsd.org (8.8.178.110): 56 data bytes
64 bytes from 8.8.178.110: icmp_seq=0 ttl=52 time=180.860 ms
64 bytes from 8.8.178.110: icmp_seq=1 ttl=52 time=180.373 ms
64 bytes from 8.8.178.110: icmp_seq=2 ttl=52 time=181.363 ms

--- freebsd.org ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 180.373/180.865/181.363/0.404 ms

As we want the latest packages, let's set that in the pkg(8) repository config file.

root@nextcloud:/ # grep quarterly /etc/pkg/FreeBSD.conf
  url: "pkg+http://pkg.FreeBSD.org/${ABI}/quarterly",

root@nextcloud:/ # sed -i '' s/quarterly/latest/g /etc/pkg/FreeBSD.conf

root@nextcloud:/ # grep latest /etc/pkg/FreeBSD.conf
  url: "pkg+http://pkg.FreeBSD.org/${ABI}/latest",

Now let's set up pkg(8) and fetch the latest repository metadata.

root@nextcloud:/ # pkg update -f
The package management tool is not yet installed on your system.
Do you want to fetch and install it now? [y/N]: y
Bootstrapping pkg from pkg+http://pkg.FreeBSD.org/FreeBSD:11:amd64/latest, please wait...
Verifying signature with trusted certificate pkg.freebsd.org.2013102301... done
[nextcloud] Installing pkg-1.10.5...
[nextcloud] Extracting pkg-1.10.5: 100%
Updating FreeBSD repository catalogue...
pkg: Repository FreeBSD load error: access repo file(/var/db/pkg/repo-FreeBSD.sqlite) failed: No such file or directory
[nextcloud] Fetching meta.txz: 100%    944 B   0.9kB/s    00:01
[nextcloud] Fetching packagesite.txz: 100%    6 MiB 530.8kB/s    00:12
Processing entries: 100%
FreeBSD repository update completed. 31134 packages processed.
All repositories are up to date.

… and an up-to-date FreeBSD Ports tree.

root@nextcloud:/ # portsnap fetch extract
Looking up portsnap.FreeBSD.org mirrors... 6 mirrors found.
Fetching public key from ec2-eu-west-1.portsnap.freebsd.org... done.
Fetching snapshot tag from ec2-eu-west-1.portsnap.freebsd.org... done.
Fetching snapshot metadata... done.
Fetching snapshot generated at Mon Apr  2 02:06:03 CEST 2018:
7cd019f9e1af8a9d637a56ba3d2bbc2025f54d9931cd8b100% of   79 MB  624 kBps 02m10s
Extracting snapshot... done.
Verifying snapshot integrity...
(...)
Building new INDEX files... done.

root@nextcloud:/ # portsnap fetch update
Looking up portsnap.FreeBSD.org mirrors... 6 mirrors found.
Fetching snapshot tag from ec2-eu-west-1.portsnap.freebsd.org... done.
Ports tree hasn't changed since last snapshot.
No updates needed.
Ports tree is already up to date.

By default the Nextcloud 13 package in the repository is built with MySQL 5.6 and the older PHP 5.6; thus we cannot use packages for everything, and some (automated) compilation is unavoidable.

root@nextcloud:/ # cd /usr/ports/www/nextcloud

root@nextcloud:/usr/ports/www/nextcloud # make run-depends-list | grep -m 1 php
/usr/ports/lang/php56

root@nextcloud:/usr/ports/www/nextcloud # make run-depends-list | grep -m 1 database
/usr/ports/databases/mysql56-client

Let's check the FreeBSD Ports default package versions.

root@nextcloud:/ # grep -E '^[A-Z]+_DEFAULT' /usr/ports/Mk/bsd.default-versions.mk | column -t
APACHE_DEFAULT?=       2.4
BDB_DEFAULT?=          5
FIREBIRD_DEFAULT?=     2.5
FORTRAN_DEFAULT?=      gfortran
FPC_DEFAULT?=          3.0.4
GCC_DEFAULT?=          6
GHOSTSCRIPT_DEFAULT?=  agpl
LAZARUS_DEFAULT?=      1.8.2
LINUX_DEFAULT?=        c6_64
LINUX_DEFAULT?=        c6
LUA_DEFAULT?=          5.2
MYSQL_DEFAULT?=        5.6
PGSQL_DEFAULT?=        9.5
PHP_DEFAULT?=          5.6
PYTHON_DEFAULT?=       2.7
RUBY_DEFAULT?=         2.4
SAMBA_DEFAULT?=        4.6
SSL_DEFAULT=           base
SSL_DEFAULT:=          ${OPENSSL_INSTALLED:T}
SSL_DEFAULT?=          base
TCLTK_DEFAULT?=        8.6
VARNISH_DEFAULT?=      4

It's PostgreSQL 9.5 and PHP 5.6. We will override that in the /etc/make.conf file with the following settings. We will also force the PGSQL option and disable the MYSQL option for all ports.

root@nextcloud:/ # cat >> /etc/make.conf << __EOF
WRKDIRPREFIX=\${PORTSDIR}/obj
DEFAULT_VERSIONS+= php=7.2
DEFAULT_VERSIONS+= pgsql=10
OPTIONS_UNSET+=    MYSQL
OPTIONS_SET+=      PGSQL
__EOF
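
We can sanity-check that the ports framework picks up these overrides; make's -V flag prints the value of a variable, so the commands below should now report 7.2 and 10 respectively.

root@nextcloud:/ # make -C /usr/ports/www/nextcloud -V PHP_DEFAULT
root@nextcloud:/ # make -C /usr/ports/www/nextcloud -V PGSQL_DEFAULT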

Now, let's display the default Nextcloud port configuration.

root@nextcloud:/usr/ports/www/nextcloud # make showconfig
===> The following configuration options are available for nextcloud-13.0.0:
     EXIF=on: Image rotation support
     LDAP=on: LDAP protocol support
     SMB=on: SMB network protocol support
     SSL=on: SSL protocol support
====> Database backend(s): you have to choose at least one of them
     MYSQL=on: MySQL database support
     PGSQL=off: PostgreSQL database support
     SQLITE=off: SQLite database support
===> Use 'make config' to modify these settings

PostgreSQL support is not even enabled. Let's configure the Nextcloud port to our needs.

root@nextcloud:/usr/ports/www/nextcloud # make config

nextcloud-06-xterm-make-config

Let's check the current port configuration.

root@nextcloud:/usr/ports/www/nextcloud # make showconfig
===> The following configuration options are available for nextcloud-13.0.0:
     EXIF=on: Image rotation support
     LDAP=on: LDAP protocol support
     SMB=on: SMB network protocol support
     SSL=on: SSL protocol support
====> Database backend(s): you have to choose at least one of them
     MYSQL=off: MySQL database support
     PGSQL=on: PostgreSQL database support
     SQLITE=off: SQLite database support
===> Use 'make config' to modify these settings

Good. In case you wondered where these settings are stored, below is the answer. Yes, if you delete the /var/db/ports/www_nextcloud directory, they will be brought back to the defaults: without PostgreSQL and with MySQL.

root@nextcloud:/ # cat /var/db/ports/www_nextcloud/options
# This file is auto-generated by 'make config'.
# Options for nextcloud-13.0.0
_OPTIONS_READ=nextcloud-13.0.0
_FILE_COMPLETE_OPTIONS_LIST=EXIF LDAP SMB SSL MYSQL PGSQL SQLITE
OPTIONS_FILE_SET+=EXIF
OPTIONS_FILE_SET+=LDAP
OPTIONS_FILE_SET+=SMB
OPTIONS_FILE_SET+=SSL
OPTIONS_FILE_UNSET+=MYSQL
OPTIONS_FILE_SET+=PGSQL
OPTIONS_FILE_UNSET+=SQLITE

Now let's check the run-depends-list again after our configuration.

root@nextcloud:/ # make -C /usr/ports/www/nextcloud run-depends-list | grep -m 1 php
/usr/ports/lang/php72

root@nextcloud:/ # make -C /usr/ports/www/nextcloud run-depends-list | grep -m 1 sql
/usr/ports/databases/postgresql10-client

Better.

To make things a little faster and to avoid building everything from FreeBSD Ports, we may add most of the needed software from packages. We will have to switch to the /bin/sh shell for that purpose.

root@nextcloud:/usr/ports/www/nextcloud # exit

host # jexec nextcloud /bin/sh

root@nextcloud:/ # make -C /usr/ports/www/nextcloud run-depends-list | while read I; do echo pkg install -y $( basename $I );done
pkg install -y gettext-runtime
pkg install -y postgresql10-client
pkg install -y pecl-smbclient
pkg install -y php72
pkg install -y php72-bz2
pkg install -y php72-ctype
pkg install -y php72-curl
pkg install -y php72-dom
pkg install -y php72-fileinfo
pkg install -y php72-filter
pkg install -y php72-gd
pkg install -y php72-hash
pkg install -y php72-iconv
pkg install -y php72-json
pkg install -y php72-mbstring
pkg install -y php72-pdo
pkg install -y php72-posix
pkg install -y php72-session
pkg install -y php72-simplexml
pkg install -y php72-xml
pkg install -y php72-xmlreader
pkg install -y php72-xmlwriter
pkg install -y php72-xsl
pkg install -y php72-wddx
pkg install -y php72-zip
pkg install -y php72-zlib
pkg install -y php72-exif
pkg install -y php72-ldap
pkg install -y php72-openssl
pkg install -y php72-pdo_pgsql
pkg install -y php72-pgsql

So we have our list of commands to install most of the packages; let's paste them one by one into the nextcloud.local prompt. As packages are built, they sometimes get a prefix indicating which version of the 'upstream' port they were built against; the 'pecl-smbclient' port is a good example here:

root@nextcloud:/ # pkg install -y pecl-smbclient
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
pkg: No packages available to install matching 'pecl-smbclient' have been found in the repositories

root@nextcloud:/ # pkg search pecl-smbclient
php56-pecl-smbclient-0.9.0_3   Smbclient wrapper extension
php70-pecl-smbclient-0.9.0_3   Smbclient wrapper extension
php71-pecl-smbclient-0.9.0_3   Smbclient wrapper extension
php72-pecl-smbclient-0.9.0_3   Smbclient wrapper extension

Here is the complete list of packages that we need to install (note the php72- prefix on the pecl packages).

root@nextcloud:/ # pkg install -y gettext-runtime
root@nextcloud:/ # pkg install -y postgresql10-client
root@nextcloud:/ # pkg install -y pecl-smbclient
root@nextcloud:/ # pkg install -y php72
root@nextcloud:/ # pkg install -y php72-bz2
root@nextcloud:/ # pkg install -y php72-ctype
root@nextcloud:/ # pkg install -y php72-curl
root@nextcloud:/ # pkg install -y php72-dom
root@nextcloud:/ # pkg install -y php72-fileinfo
root@nextcloud:/ # pkg install -y php72-filter
root@nextcloud:/ # pkg install -y php72-gd
root@nextcloud:/ # pkg install -y php72-hash
root@nextcloud:/ # pkg install -y php72-iconv
root@nextcloud:/ # pkg install -y php72-json
root@nextcloud:/ # pkg install -y php72-mbstring
root@nextcloud:/ # pkg install -y php72-pdo
root@nextcloud:/ # pkg install -y php72-posix
root@nextcloud:/ # pkg install -y php72-session
root@nextcloud:/ # pkg install -y php72-simplexml
root@nextcloud:/ # pkg install -y php72-xml
root@nextcloud:/ # pkg install -y php72-xmlreader
root@nextcloud:/ # pkg install -y php72-xmlwriter
root@nextcloud:/ # pkg install -y php72-xsl
root@nextcloud:/ # pkg install -y php72-wddx
root@nextcloud:/ # pkg install -y php72-zip
root@nextcloud:/ # pkg install -y php72-zlib
root@nextcloud:/ # pkg install -y php72-exif
root@nextcloud:/ # pkg install -y php72-ldap
root@nextcloud:/ # pkg install -y php72-openssl
root@nextcloud:/ # pkg install -y php72-pdo_pgsql
root@nextcloud:/ # pkg install -y php72-pgsql
root@nextcloud:/ # pkg install -y php72-pecl-smbclient
root@nextcloud:/ # pkg install -y nginx
root@nextcloud:/ # pkg install -y memcached
root@nextcloud:/ # pkg install -y portmaster
root@nextcloud:/ # pkg install -y sudo
root@nextcloud:/ # pkg install -y php72-pecl-memcached
root@nextcloud:/ # pkg install -y php72-pcntl
root@nextcloud:/ # pkg install -y postgresql10-server

Some packages, like postgresql10-server, will remove packages built against the PostgreSQL 9.5 version, as shown below.

root@nextcloud:/ # pkg install postgresql10-server
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
Checking integrity... done (2 conflicting)
  - postgresql10-client-10.3 conflicts with postgresql95-client-9.5.12 on /usr/local/bin/clusterdb
  - postgresql10-client-10.3 conflicts with postgresql95-client-9.5.12 on /usr/local/bin/clusterdb
Checking integrity... done (0 conflicting)
The following 5 package(s) will be affected (of 0 checked):

Installed packages to be REMOVED:
        postgresql95-client-9.5.12
        php72-pdo_pgsql-7.2.4
        php72-pgsql-7.2.4

New packages to be INSTALLED:
        postgresql10-server: 10.3
        postgresql10-client: 10.3

Number of packages to be removed: 3
Number of packages to be installed: 2

The process will require 21 MiB more space.

Proceed with this action? [y/N]: y

Now we need to build the rest.

root@nextcloud:/ # portmaster databases/php72-pgsql databases/php72-pdo_pgsql www/nextcloud www/php72-opcache devel/php72-intl mail/cclient mail/php72-imap math/php72-gmp ftp/php72-ftp
(...)
===>>> The following actions were performed:
        Installation of devel/gmake (gmake-4.2.1_2)
        Installation of devel/gettext-tools (gettext-tools-0.19.8.1)
        Installation of devel/p5-Locale-gettext (p5-Locale-gettext-1.07)
        Installation of misc/help2man (help2man-1.47.6)
        Installation of print/texinfo (texinfo-6.5,1)
        Installation of devel/m4 (m4-1.4.18,1)
        Installation of devel/autoconf-wrapper (autoconf-wrapper-20131203)
        Installation of devel/autoconf (autoconf-2.69_1)
        Installation of databases/php72-pgsql (php72-pgsql-7.2.4)
        Installation of databases/php72-pdo_pgsql (php72-pdo_pgsql-7.2.4)
        Installation of www/nextcloud (nextcloud-13.0.0)
        Installation of www/php72-opcache (php72-opcache-7.2.4)
        Installation of devel/php72-intl (php72-intl-7.2.4)
        Installation of mail/cclient (cclient-2007f_3,1)
        Installation of mail/php72-imap (php72-imap-7.2.4)
        Installation of math/php72-gmp (php72-gmp-7.2.4)
        Installation of ftp/php72-ftp (php72-ftp-7.2.4)

Alternatively, to avoid juggling between packages and ports, you may build everything from the FreeBSD Ports tree with the command below.

root@nextcloud:/ # portmaster -y www/nextcloud www/nginx databases/memcached security/sudo databases/postgresql10-server www/php72-opcache devel/php72-intl mail/cclient mail/php72-imap math/php72-gmp ftp/php72-ftp

Whichever method you choose, you must end up with these packages installed. This list does not contain dependencies.

root@nextcloud:/ # pkg info | grep -E 'php|nginx|memcached|postgresql|sudo|nextcloud|portmaster'
libmemcached-1.0.18_6          C and C++ client library to the memcached server
memcached-1.5.7                High-performance distributed memory object cache system
nextcloud-13.0.0               Personal cloud which runs on your own server
nginx-1.12.2_11,2              Robust and small WWW server
php72-7.2.4                    PHP Scripting Language
php72-bz2-7.2.4                The bz2 shared extension for php
php72-ctype-7.2.4              The ctype shared extension for php
php72-curl-7.2.4               The curl shared extension for php
php72-dom-7.2.4                The dom shared extension for php
php72-exif-7.2.4               The exif shared extension for php
php72-fileinfo-7.2.4           The fileinfo shared extension for php
php72-filter-7.2.4             The filter shared extension for php
php72-ftp-7.2.4                The ftp shared extension for php
php72-gd-7.2.4                 The gd shared extension for php
php72-gmp-7.2.4                The gmp shared extension for php
php72-hash-7.2.4               The hash shared extension for php
php72-iconv-7.2.4              The iconv shared extension for php
php72-imap-7.2.4               The imap shared extension for php
php72-intl-7.2.4               The intl shared extension for php
php72-json-7.2.4               The json shared extension for php
php72-ldap-7.2.4               The ldap shared extension for php
php72-mbstring-7.2.4           The mbstring shared extension for php
php72-opcache-7.2.4            The opcache shared extension for php
php72-openssl-7.2.4            The openssl shared extension for php
php72-pcntl-7.2.4              The pcntl shared extension for php
php72-pdo-7.2.4                The pdo shared extension for php
php72-pdo_pgsql-7.2.4          The pdo_pgsql shared extension for php
php72-pecl-memcached-3.0.4     PHP extension for interfacing with memcached via libmemcached library
php72-pecl-smbclient-0.9.0_3   Smbclient wrapper extension
php72-pgsql-7.2.4              The pgsql shared extension for php
php72-posix-7.2.4              The posix shared extension for php
php72-session-7.2.4            The session shared extension for php
php72-simplexml-7.2.4          The simplexml shared extension for php
php72-wddx-7.2.4               The wddx shared extension for php
php72-xml-7.2.4                The xml shared extension for php
php72-xmlreader-7.2.4          The xmlreader shared extension for php
php72-xmlwriter-7.2.4          The xmlwriter shared extension for php
php72-xsl-7.2.4                The xsl shared extension for php
php72-zip-7.2.4                The zip shared extension for php
php72-zlib-7.2.4               The zlib shared extension for php
portmaster-3.19_7              Manage your ports without external databases or languages
postgresql10-client-10.3       PostgreSQL database (client)
postgresql10-server-10.3       PostgreSQL is the most advanced open-source database available anywhere
sudo-1.8.22                    Allow others to run commands as root

PostgreSQL Database

Now we have to configure the PostgreSQL database. First let's start with the FreeBSD login class.

root@nextcloud:/ # cat >> /etc/login.conf << __EOF
postgres:\
        :lang=en_US.UTF-8:\
        :setenv=LC_COLLATE=C:\
        :tc=default:
__EOF

root@nextcloud:/ # grep -A 4 postgres /etc/login.conf
postgres:\
        :lang=en_US.UTF-8:\
        :setenv=LC_COLLATE=C:\
        :tc=default:

root@nextcloud:/ # cap_mkdb /etc/login.conf

Let's make sure that the PostgreSQL data directory belongs to the postgres user.

root@nextcloud:/ # chown postgres:postgres /var/db/postgres/data

Let's enable the PostgreSQL service in the /etc/rc.conf file. It will look like this for now.

root@nextcloud:/ # cat /etc/rc.conf
# DAEMONS | yes
  syslogd_flags="-s -s"
  sshd_enable=YES
  postgresql_enable=YES
  postgresql_class=postgres
  postgresql_data=/var/db/postgres/data
# php_fpm_enable=YES
# memcached_enable=YES
# memcached_flags="-l 192.168.43.100"
# nginx_enable=YES

# DAEMONS | no
  sendmail_enable=NONE
  sendmail_submit_enable=NO
  sendmail_outbound_enable=NO
  sendmail_msp_queue_enable=NO

# OTHER
  clear_tmp_enable=YES
  clear_tmp_X=YES
  extra_netfs_types=NFS
  dumpdev=NO
  update_motd=NO
  keyrate=fast
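
If you prefer not to edit /etc/rc.conf by hand, the same settings may also be applied with the sysrc(8) tool from the FreeBSD base system. A sketch of the equivalent invocation:

root@nextcloud:/ # sysrc postgresql_enable=YES \
                         postgresql_class=postgres \
                         postgresql_data=/var/db/postgres/data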

Now we may initialize and start the database.

root@nextcloud:/ # /usr/local/etc/rc.d/postgresql initdb
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locales
  COLLATE:  C
  CTYPE:    en_US.UTF-8
  MESSAGES: en_US.UTF-8
  MONETARY: en_US.UTF-8
  NUMERIC:  en_US.UTF-8
  TIME:     en_US.UTF-8
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /var/db/postgres/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    /usr/local/bin/pg_ctl -D /var/db/postgres/data -l logfile start

Now let's start it.

root@nextcloud:/ # /usr/local/etc/rc.d/postgresql start
2018-04-03 13:41:49.289 CEST [14522] LOG:  could not create IPv6 socket for address "::1": Protocol not supported
2018-04-03 13:41:49.291 CEST [14522] LOG:  listening on IPv4 address "127.0.0.1", port 5432
2018-04-03 13:41:49.297 CEST [14522] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5432"
2018-04-03 13:41:49.328 CEST [14522] LOG:  ending log output to stderr
2018-04-03 13:41:49.328 CEST [14522] HINT:  Future log output will go to log destination "syslog".

root@nextcloud:/ # sockstat -l4
USER     COMMAND    PID   FD PROTO  LOCAL ADDRESS         FOREIGN ADDRESS
postgres postgres   14522 4  tcp4   192.168.43.100:5432   *:*
root     sshd       14178 3  tcp4   192.168.43.100:22     *:*

OK, it's working and listening for connections.

Next we have to connect and create a user and a database for the Nextcloud server.

root@nextcloud:/ # psql -h nextcloud.local -U postgres
psql: FATAL:  no pg_hba.conf entry for host "192.168.43.100", user "postgres", database "postgres", SSL off

Remember the rule about localhost in a Jail? There is no localhost in a Jail. We have to add the 192.168.43.100/32 address to the /var/db/postgres/data/pg_hba.conf file as shown below.

root@nextcloud:/ # grep -C 1 192.168.43.100 /var/db/postgres/data/pg_hba.conf
# IPv4 local connections:
host    all             all             192.168.43.100/32       trust
host    all             all             127.0.0.1/32            trust

Now let's restart the PostgreSQL database.

root@nextcloud:/ # /usr/local/etc/rc.d/postgresql restart
2018-04-03 13:44:01.264 CEST [14692] LOG:  could not create IPv6 socket for address "::1": Protocol not supported
2018-04-03 13:44:01.266 CEST [14692] LOG:  listening on IPv4 address "127.0.0.1", port 5432
2018-04-03 13:44:01.271 CEST [14692] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5432"
2018-04-03 13:44:01.296 CEST [14692] LOG:  ending log output to stderr
2018-04-03 13:44:01.296 CEST [14692] HINT:  Future log output will go to log destination "syslog".

Now we can connect and create the needed user and database for the Nextcloud 13 installation.

root@nextcloud:/ # psql -h nextcloud.local -U postgres
psql (10.3)
Type "help" for help.

postgres=# CREATE USER nextcloud WITH PASSWORD '{NEXTCLOUD_DB_PASSWORD}';
CREATE ROLE
postgres=# CREATE DATABASE nextcloud TEMPLATE template0 ENCODING 'UNICODE';
CREATE DATABASE
postgres=# ALTER DATABASE nextcloud OWNER TO nextcloud;
ALTER DATABASE
postgres-# \q
root@nextcloud:/ #
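
To confirm that the new role can reach its database, a quick sanity check may be run with psql; this assumes the trust entries in pg_hba.conf shown earlier. It should print nextcloud back as the current user.

root@nextcloud:/ # psql -h nextcloud.local -U nextcloud -d nextcloud -c 'SELECT current_user;'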

I will also create a 'daily maintenance' script for the PostgreSQL database.

root@nextcloud:/ # cat >> /var/db/postgres/data/vacuum.sh << __EOF
#! /bin/sh
/usr/local/bin/vacuumdb -az 1> /dev/null 2> /dev/null
/usr/local/bin/reindexdb -a 1> /dev/null 2> /dev/null
/usr/local/bin/reindexdb -s 1> /dev/null 2> /dev/null
__EOF

root@nextcloud:/ # chmod +x /var/db/postgres/data/vacuum.sh
root@nextcloud:/ # chown postgres:postgres /var/db/postgres/data/vacuum.sh

Let's add it as a cron job for the postgres user.

root@nextcloud:/ # su - postgres -c 'crontab -e'
/tmp/crontab.ruG73E5ivZ: 1 lines, 42 characters.
crontab: installing new crontab

root@nextcloud:/ # su - postgres -c 'crontab -l'
0 0 * * * /var/db/postgres/data/vacuum.sh
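
If you prefer to install that cron job non-interactively – for example from a provisioning script – crontab(1) also accepts its input on stdin. Note that this replaces the whole crontab of the postgres user:

root@nextcloud:/ # echo '0 0 * * * /var/db/postgres/data/vacuum.sh' | su - postgres -c 'crontab -'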

Nginx Webserver

Now we have to configure Nginx. Let's start by creating a self-signed certificate. If you do not want browsers to warn that the certificate is not trusted, you may use a service such as letsencrypt.org instead.

root@nextcloud:/ # mkdir -p /usr/local/etc/nginx/ssl
root@nextcloud:/ # cd /usr/local/etc/nginx/ssl
root@nextcloud:/usr/local/etc/nginx/ssl # openssl req -x509 -nodes -days 3650 -newkey rsa:4096 -keyout nginx.key -out nginx.crt
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:PL
State or Province Name (full name) [Some-State]:lodzkie
Locality Name (eg, city) []:Lodz
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Vermaden Enterprises Ltd.
Organizational Unit Name (eg, section) []:Nextcloud Departament
Common Name (e.g. server FQDN or YOUR name) []:nextcloud.local
Email Address []:vermaden@nextcloud.com

root@nextcloud:/ # chmod 400 /usr/local/etc/nginx/ssl/nginx.key
root@nextcloud:/ # ls -l /usr/local/etc/nginx/ssl
total 14
-rw-r--r--  1 root  wheel  2220 Apr  3 14:43 nginx.crt
-rw-------  1 root  wheel  3272 Apr  3 14:43 nginx.key
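
You may inspect the generated certificate – its subject and validity dates – with the openssl x509 subcommand:

root@nextcloud:/ # openssl x509 -in /usr/local/etc/nginx/ssl/nginx.crt -noout -subject -dates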

Let's take care of the permissions for the Nginx log files.

root@nextcloud:/ # ls -l /var/log | grep nginx
drwxr-xr-x  2 root  wheel        2 Apr  3 01:10 nginx

root@nextcloud:/ # chown -R www:www /var/log/nginx

root@nextcloud:/ # ls -l /var/log/ | grep nginx
drwxr-xr-x  2 www   www          2 Apr  3 01:10 nginx

… and last but not least, the Nginx main configuration file.

root@nextcloud:/ # cat /usr/local/etc/nginx/nginx.conf
user                 www;
worker_processes     4;
worker_rlimit_nofile 51200;
error_log            /var/log/nginx/error.log;

events {
  worker_connections 1024;
}

http {
  include           mime.types;
  default_type      application/octet-stream;
  log_format        main  '$remote_addr - $remote_user [$time_local] "$request" ';
  access_log        /var/log/nginx/access.log main;
  sendfile          on;
  keepalive_timeout 65;

  upstream php-handler {
    server unix:/var/run/php-fpm.sock;
  }

  server {
    # ENFORCE HTTPS
    listen      80;
    server_name nextcloud.local;
    return      301 https://$server_name$request_uri;
  }

  server {
    listen              443 ssl http2;
    server_name         nextcloud.local;
    ssl_certificate     /usr/local/etc/nginx/ssl/nginx.crt;
    ssl_certificate_key /usr/local/etc/nginx/ssl/nginx.key;

    # HEADERS SECURITY RELATED
    add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;";

    # HEADERS
    add_header X-Content-Type-Options nosniff;
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Robots-Tag none;
    add_header X-Download-Options noopen;
    add_header X-Permitted-Cross-Domain-Policies none;

    # PATH TO THE ROOT OF YOUR INSTALLATION
    root /usr/local/www/nextcloud/;

    location = /robots.txt {
      allow all;
      log_not_found off;
      access_log off;
    }

    location = /.well-known/carddav {
      return 301 $scheme://$host/remote.php/dav;
    }

    location = /.well-known/caldav {
      return 301 $scheme://$host/remote.php/dav;
    }

    # BUFFERS TIMEOUTS UPLOAD SIZES
    client_max_body_size    16400M;
    client_body_buffer_size 1048576k;
    send_timeout            3000;

    # ENABLE GZIP BUT DO NOT REMOVE ETag HEADERS
    gzip on;
    gzip_vary on;
    gzip_comp_level 4;
    gzip_min_length 256;
    gzip_proxied expired no-cache no-store private no_last_modified no_etag auth;
    gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy;

    location / {
      rewrite ^ /index.php$uri;
    }

    location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)/ {
      deny all;
    }

    location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console) {
      deny all;
    }

    location ~ ^/(?:index|remote|public|cron|core/ajax/update|status|ocs/v[12]|updater/.+|ocs-provider/.+)\.php(?:$|/) {
      fastcgi_split_path_info ^(.+\.php)(/.*)$;
      include fastcgi_params;
      fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
      fastcgi_param PATH_INFO $fastcgi_path_info;
      fastcgi_param HTTPS on;
      fastcgi_param modHeadersAvailable true;
      fastcgi_param front_controller_active true;
      fastcgi_pass php-handler;
      fastcgi_intercept_errors on;
      fastcgi_request_buffering off;
      fastcgi_keep_conn       off;
      fastcgi_buffers         16 256K;
      fastcgi_buffer_size     256k;
      fastcgi_busy_buffers_size 256k;
      fastcgi_temp_file_write_size 256k;
      fastcgi_send_timeout    3000s;
      fastcgi_read_timeout    3000s;
      fastcgi_connect_timeout 3000s;
    }

    location ~ ^/(?:updater|ocs-provider)(?:$|/) {
      try_files $uri/ =404;
      index index.php;
    }

    # ADDING THE CACHE CONTROL HEADER FOR JS AND CSS FILES
    # MAKE SURE IT IS BELOW PHP BLOCK
    location ~ \.(?:css|js|woff|svg|gif)$ {
      try_files $uri /index.php$uri$is_args$args;
      add_header Cache-Control "public, max-age=15778463";
      # HEADERS SECURITY RELATED
      # IT IS INTENDED TO HAVE THOSE DUPLICATED TO ONES ABOVE
      add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;";
      # HEADERS
      add_header X-Content-Type-Options nosniff;
      add_header X-XSS-Protection "1; mode=block";
      add_header X-Robots-Tag none;
      add_header X-Download-Options noopen;
      add_header X-Permitted-Cross-Domain-Policies none;
      # OPTIONAL: DONT LOG ACCESS TO ASSETS
      access_log off;
    }

    location ~ \.(?:png|html|ttf|ico|jpg|jpeg)$ {
      try_files $uri /index.php$uri$is_args$args;
      # OPTIONAL: DONT LOG ACCESS TO OTHER ASSETS
      access_log off;
    }
  }
}
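
Before starting the webserver it is worth validating the configuration syntax; Nginx has a built-in configuration test for that, and the rc.d script runs the same sanity check at startup:

root@nextcloud:/ # nginx -t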

If at any point later you get the following error in the browser, then there is a problem with the Nginx and PHP (php-fpm) configuration.

502 Bad Gateway
---------------
nginx/1.12.2
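
In that case the usual suspects are the php-fpm socket and the php-fpm log, so these two commands are a good starting point for debugging:

root@nextcloud:/ # ls -l /var/run/php-fpm.sock
root@nextcloud:/ # tail /var/log/php-fpm.log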

PHP

Now we have to configure PHP for our needs. First, the PostgreSQL-related settings.

root@nextcloud:/ # cat /usr/local/etc/php/ext-20-pgsql.ini
extension=pgsql.so

root@nextcloud:/ # cat >> /usr/local/etc/php/ext-20-pgsql.ini << __EOF

[PostgreSQL]
pgsql.allow_persistent = On
pgsql.auto_reset_persistent = Off
pgsql.max_persistent = -1
pgsql.max_links = -1
pgsql.ignore_notice = 0
pgsql.log_notice = 0
__EOF

root@nextcloud:/ # cat /usr/local/etc/php/ext-20-pgsql.ini
extension=pgsql.so

[PostgreSQL]
pgsql.allow_persistent = On
pgsql.auto_reset_persistent = Off
pgsql.max_persistent = -1
pgsql.max_links = -1
pgsql.ignore_notice = 0
pgsql.log_notice = 0

root@nextcloud:/ # cat /usr/local/etc/php/ext-30-pdo_pgsql.ini
extension=pdo_pgsql.so


root@nextcloud:/ # cat >> /usr/local/etc/php/ext-30-pdo_pgsql.ini << __EOF

[PostgreSQL]
pgsql.allow_persistent = On
pgsql.auto_reset_persistent = Off
pgsql.max_persistent = -1
pgsql.max_links = -1
pgsql.ignore_notice = 0
pgsql.log_notice = 0
__EOF

root@nextcloud:/ # cat /usr/local/etc/php/ext-30-pdo_pgsql.ini
extension=pdo_pgsql.so

[PostgreSQL]
pgsql.allow_persistent = On
pgsql.auto_reset_persistent = Off
pgsql.max_persistent = -1
pgsql.max_links = -1
pgsql.ignore_notice = 0
pgsql.log_notice = 0
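
To verify that PHP actually loads both PostgreSQL extensions, you may list the compiled-in modules and filter for them; both names should appear in the output:

root@nextcloud:/ # php -m | grep -E '^(pgsql|pdo_pgsql)$'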

Let's make sure the php-fpm log file exists and has the right owner.

root@nextcloud:/ # :> /var/log/php-fpm.log
root@nextcloud:/ # chown www:www /var/log/php-fpm.log

No modifications are needed to the /usr/local/etc/php-fpm.conf file.

root@nextcloud:/ # grep '^[^;]' /usr/local/etc/php-fpm.conf
[global]
pid = run/php-fpm.pid
include=/usr/local/etc/php-fpm.d/*.conf

Let's create the www profile for the php-fpm daemon.

root@nextcloud:/ # cat /usr/local/etc/php-fpm.d/www.conf
[www]
user = www
group = www
listen = /var/run/php-fpm.sock
listen.backlog = -1
listen.owner = www
listen.group = www
listen.mode=0660
pm = static
pm.max_children = 4
pm.start_servers = 2
pm.min_spare_servers = 2
pm.max_spare_servers = 4
pm.process_idle_timeout = 1000s
pm.max_requests = 500
request_terminate_timeout = 0
rlimit_files = 51200
env[HOSTNAME] = $HOSTNAME
env[PATH] = /usr/local/bin:/usr/bin:/bin
env[TMP] = /tmp
env[TMPDIR] = /tmp
env[TEMP] = /tmp

… and the main PHP /usr/local/etc/php.ini configuration file.

root@nextcloud:/ # cat /usr/local/etc/php.ini
[PHP]
engine = On
short_open_tag = On
precision = 14
output_buffering = OFF
zlib.output_compression = Off
implicit_flush = Off
unserialize_callback_func =
serialize_precision = 17
disable_functions =
disable_classes =
zend.enable_gc = On
expose_php = On
max_execution_time = 3600
max_input_time = 30000
memory_limit = 1024M
error_reporting = E_ALL & ~E_DEPRECATED & ~E_STRICT
display_errors = Off
display_startup_errors = Off
log_errors = On
log_errors_max_len = 1024
ignore_repeated_errors = Off
ignore_repeated_source = Off
report_memleaks = On
track_errors = Off
html_errors = On
error_log = /var/log/php.log
variables_order = "GPCS"
request_order = "GP"
register_argc_argv = Off
auto_globals_jit = On
post_max_size = 16400M
auto_prepend_file =
auto_append_file =
default_mimetype = "text/html"
default_charset = "UTF-8"
doc_root =
user_dir =
enable_dl = Off
file_uploads = On
upload_max_filesize = 16400M
max_file_uploads = 64
allow_url_fopen = On
allow_url_include = Off
default_socket_timeout = 300
[CLI Server]
cli_server.color = On
[Date]
date.timezone = Europe/Warsaw
[filter]
[iconv]
[intl]
[sqlite3]
[Pcre]
[Pdo]
[Pdo_mysql]
pdo_mysql.cache_size = 2000
pdo_mysql.default_socket=
[Phar]
[mail function]
SMTP = localhost
smtp_port = 25
mail.add_x_header = On
[SQL]
sql.safe_mode = Off
[ODBC]
odbc.allow_persistent = On
odbc.check_persistent = On
odbc.max_persistent = -1
odbc.max_links = -1
odbc.defaultlrl = 4096
odbc.defaultbinmode = 1
[Interbase]
ibase.allow_persistent = 1
ibase.max_persistent = -1
ibase.max_links = -1
ibase.timestampformat = "%Y-%m-%d %H:%M:%S"
ibase.dateformat = "%Y-%m-%d"
ibase.timeformat = "%H:%M:%S"
[MySQLi]
mysqli.max_persistent = -1
mysqli.allow_persistent = On
mysqli.max_links = -1
mysqli.cache_size = 2000
mysqli.default_port = 3306
mysqli.default_socket =
mysqli.default_host =
mysqli.default_user =
mysqli.default_pw =
mysqli.reconnect = Off
[mysqlnd]
mysqlnd.collect_statistics = On
mysqlnd.collect_memory_statistics = Off
[OCI8]
[PostgreSQL]
pgsql.allow_persistent = On
pgsql.auto_reset_persistent = Off
pgsql.max_persistent = -1
pgsql.max_links = -1
pgsql.ignore_notice = 0
pgsql.log_notice = 0
[bcmath]
bcmath.scale = 0
[browscap]
[Session]
session.save_handler = files
session.save_path = "/tmp"
session.use_strict_mode = 0
session.use_cookies = 1
session.use_only_cookies = 1
session.name = PHPSESSID
session.auto_start = 0
session.cookie_lifetime = 0
session.cookie_path = /
session.cookie_domain =
session.cookie_httponly =
session.serialize_handler = php
session.gc_probability = 1
session.gc_divisor = 1000
session.gc_maxlifetime = 1440
session.referer_check =
session.cache_limiter = nocache
session.cache_expire = 180
session.use_trans_sid = 0
session.hash_function = 0
session.hash_bits_per_character = 5
url_rewriter.tags = "a=href,area=href,frame=src,input=src,form=fakeentry"
[Assertion]
zend.assertions = -1
[COM]
[mbstring]
[gd]
[exif]
[Tidy]
tidy.clean_output = Off
[soap]
soap.wsdl_cache_enabled=1
soap.wsdl_cache_dir="/tmp"
soap.wsdl_cache_ttl=86400
soap.wsdl_cache_limit = 5
[sysvshm]
[ldap]
ldap.max_links = -1
[mcrypt]
[dba]
[opcache]
opcache.enable=1
opcache.enable_cli=1
opcache.interned_strings_buffer=8
opcache.max_accelerated_files=10000
opcache.memory_consumption=128
opcache.save_comments=1
opcache.revalidate_freq=1
[curl]
[openssl]
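
A quick way to confirm that these values are the ones PHP really uses – and not overridden by some other ini file – is to query the runtime configuration from the command line:

root@nextcloud:/ # php -i | grep -E 'memory_limit|upload_max_filesize|post_max_size'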

Daemons

Now we should enable all daemons and start them. Here is the final /etc/rc.conf file.

root@nextcloud:/ # cat /etc/rc.conf
# DAEMONS | yes
  syslogd_flags="-s -s"
  sshd_enable=YES
  postgresql_enable=YES
  postgresql_class=postgres
  postgresql_data=/var/db/postgres/data
  php_fpm_enable=YES 
  memcached_enable=YES
  memcached_flags="-l 192.168.43.100"
  nginx_enable=YES

# DAEMONS | no
  sendmail_enable=NONE
  sendmail_submit_enable=NO
  sendmail_outbound_enable=NO
  sendmail_msp_queue_enable=NO

# OTHER
  clear_tmp_enable=YES
  clear_tmp_X=YES
  extra_netfs_types=NFS
  dumpdev=NO
  update_motd=NO
  keyrate=fast

Let's start the services then.

Memcached.

root@nextcloud:/ # /usr/local/etc/rc.d/memcached start
Starting memcached.
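
To confirm that memcached answers on its port, you may query its statistics with nc(1) from the base system; the address below is the Jail IP used throughout this guide:

root@nextcloud:/ # printf 'stats\r\nquit\r\n' | nc 192.168.43.100 11211 | head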

The PHP php-fpm daemon.

root@nextcloud:/ # /usr/local/etc/rc.d/php-fpm start
Performing sanity check on php-fpm configuration:
[03-Apr-2018 14:28:22] NOTICE: configuration file /usr/local/etc/php-fpm.conf test is successful

Starting php_fpm.

The PostgreSQL database should be running already.

root@nextcloud:/ # /usr/local/etc/rc.d/postgresql status
pg_ctl: server is running (PID: 17751)
/usr/local/bin/postgres "-D" "/var/db/postgres/data"

… and the Nginx webserver.

root@nextcloud:/ # /usr/local/etc/rc.d/nginx start
Performing sanity check on nginx configuration:
nginx: the configuration file /usr/local/etc/nginx/nginx.conf syntax is ok
nginx: configuration file /usr/local/etc/nginx/nginx.conf test is successful
Starting nginx.

Let's see which daemon is listening on which port.

root@nextcloud:/ # sockstat -l4
USER     COMMAND    PID   FD PROTO  LOCAL ADDRESS         FOREIGN ADDRESS
www      nginx      28583 6  tcp4   192.168.43.100:80     *:*
www      nginx      28583 7  tcp4   192.168.43.100:443    *:*
www      nginx      28582 6  tcp4   192.168.43.100:80     *:*
www      nginx      28582 7  tcp4   192.168.43.100:443    *:*
www      nginx      28581 6  tcp4   192.168.43.100:80     *:*
www      nginx      28581 7  tcp4   192.168.43.100:443    *:*
www      nginx      28580 6  tcp4   192.168.43.100:80     *:*
www      nginx      28580 7  tcp4   192.168.43.100:443    *:*
root     nginx      28579 6  tcp4   192.168.43.100:80     *:*
root     nginx      28579 7  tcp4   192.168.43.100:443    *:*
nobody   memcached  28239 16 tcp4   192.168.43.100:11211  *:*
postgres postgres   28211 4  tcp4   192.168.43.100:5432   *:*
root     sshd       28205 3  tcp4   192.168.43.100:22     *:*

Remember that the php-fpm daemon uses the /var/run/php-fpm.sock socket.

root@nextcloud:/ # ls -l /var/run/php-fpm.sock
srw-rw----  1 www  www  0 Apr  3 18:27 /var/run/php-fpm.sock
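
At this point the whole stack may be smoke-tested from the command line with fetch(1) from the base system; the --no-verify-peer and --no-verify-hostname flags are needed because of the self-signed certificate. If everything is wired correctly, this should return a small JSON document describing the installation state:

root@nextcloud:/ # fetch --no-verify-peer --no-verify-hostname -o - https://nextcloud.local/status.php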

Nextcloud

Now let's prepare the directory for the Nextcloud data.

root@nextcloud:/ # mkdir -p /var/db/nextcloud/data
root@nextcloud:/ # chown -R www:www /var/db/nextcloud

We also need to make sure that the whole Nextcloud installation directory is owned by the www user.

root@nextcloud:/ # chown -R www:www /usr/local/www/nextcloud

Now we should be able to access Nextcloud 13 using a browser; let's type https://nextcloud.local/ in the browser of your choice on the host.

nextcloud-00

Voila! It's alive.

Here are the last configuration bits entered directly in the browser.

   ADMIN USER: admin
   ADMIN PASS: {NEXTCLOUD_ADMIN_PASSWORD}
  DATA FOLDER: /var/db/nextcloud/data
DATABASE USER: nextcloud
DATABASE PASS: {NEXTCLOUD_DB_PASSWORD} 
DATABASE NAME: nextcloud
DATABASE HOST: nextcloud.local

Here is how it looks in the browser.

nextcloud-01-setup

After we click the Finish setup button we should see the Nextcloud welcome message as shown below.

nextcloud-02-welcome

We may close this message and we will see our files.

nextcloud-03-files

The Nextcloud settings page complains about the lack of a caching daemon.

nextcloud-04-nocache

Let's add our memcached daemon to the Nextcloud configuration file.

nextcloud-05-cache

Here are the lines added to the /usr/local/www/nextcloud/config/config.php file.

root@nextcloud:/usr/local/www/nextcloud/config # diff -u config.php.ORG config.php
--- config.php.ORG      2018-04-03 16:39:04.531258000 +0200
+++ config.php  2018-04-03 16:40:01.509956000 +0200
@@ -18,4 +18,14 @@
   'dbuser' => 'nextcloud',
   'dbpassword' => '',
   'installed' => true,
+  'memcache.local' => '\\OC\\Memcache\\Memcached',
+  'memcache.distributed' => '\\OC\\Memcache\\Memcached',
+  'memcached_servers' =>
+  array (
+    0 =>
+    array (
+      0 => 'nextcloud.local',
+      1 => 11211,
+    ),
+  ),
 );
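
Instead of editing config.php by hand, single keys may also be set with the occ tool; a sketch for the local cache key (the nested memcached_servers array is easier to keep in the file itself):

root@nextcloud:/ # sudo -u www php /usr/local/www/nextcloud/occ config:system:set memcache.local --value '\OC\Memcache\Memcached'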

Here is the complete Nextcloud 13 main configuration file /usr/local/www/nextcloud/config/config.php after the changes.

root@nextcloud:/ # cat /usr/local/www/nextcloud/config/config.php
<?php
$CONFIG = array (
  'instanceid' => 'oc70jc009i5e',
  'passwordsalt' => 'anVkM4F5kJwhInurq0N6eq65JmL3xZ',
  'secret' => '2RjnOfiMfrdW6rJEcpxORL39+E1gvS38+sys+G0uI6vZOOSc',
  'trusted_domains' =>
  array (
    0 => 'nextcloud.local',
  ),
  'datadirectory' => '/var/db/nextcloud/data',
  'overwrite.cli.url' => 'https://nextcloud.local',
  'dbtype' => 'pgsql',
  'version' => '13.0.0.14',
  'dbname' => 'nextcloud',
  'dbhost' => 'nextcloud.local',
  'dbport' => '',
  'dbtableprefix' => 'oc_',
  'dbuser' => 'nextcloud',
  'dbpassword' => '',
  'installed' => true,
  'memcache.local' => '\\OC\\Memcache\\Memcached',
  'memcache.distributed' => '\\OC\\Memcache\\Memcached',
  'memcached_servers' =>
  array (
    0 =>
    array (
      0 => 'nextcloud.local',
      1 => 11211,
    ),
  ),
);

Alternatively, you may get that config directly from the Nextcloud application using the occ command.

root@nextcloud:/ # sudo -u www php /usr/local/www/nextcloud/occ config:list system
{
    "system": {
        "instanceid": "***REMOVED SENSITIVE VALUE***",
        "passwordsalt": "***REMOVED SENSITIVE VALUE***",
        "secret": "***REMOVED SENSITIVE VALUE***",
        "trusted_domains": [
            "nextcloud.local"
        ],
        "datadirectory": "***REMOVED SENSITIVE VALUE***",
        "overwrite.cli.url": "https:\/\/nextcloud.local",
        "dbtype": "pgsql",
        "version": "13.0.0.14",
        "dbname": "***REMOVED SENSITIVE VALUE***",
        "dbhost": "***REMOVED SENSITIVE VALUE***",
        "dbport": "",
        "dbtableprefix": "oc_",
        "dbuser": "***REMOVED SENSITIVE VALUE***",
        "dbpassword": "***REMOVED SENSITIVE VALUE***",
        "installed": true,
        "memcache.local": "\\OC\\Memcache\\Memcached",
        "memcache.distributed": "\\OC\\Memcache\\Memcached",
        "memcached_servers": [
            [
                "nextcloud.local",
                11211
            ]
        ]
    }
}

Logs

To avoid ending up with a /var/log directory filled with tons of logs, we need to configure their rotation.

Here are the lines added to the /etc/newsyslog.conf file.

root@nextcloud:/ # cat >> /etc/newsyslog.conf << __EOF
/var/db/nextcloud/data/nextcloud.log     www:www     640  7     *    @T00  JC
/var/log/php-fpm.log                     www:www     640  7     *    @T00  JC
/var/log/nginx/error.log                 www:www     640  7     *    @T00  JC
/var/log/nginx/access.log                www:www     640  7     *    @T00  JC
__EOF

Let's verify the rotation.

root@nextcloud:/ # newsyslog -v | tail -4
/var/db/nextcloud/data/nextcloud.log : --> will trim at Fri Jul 21 00:00:00 2017
/var/log/php-fpm.log : --> will trim at Fri Jul 21 00:00:00 2017
/var/log/nginx/error.log : --> will trim at Fri Jul 21 00:00:00 2017
/var/log/nginx/access.log : --> will trim at Fri Jul 21 00:00:00 2017

Yep. Works like a charm.

Cleanup (Optional)

We may now remove unneeded parts of the Jail, for example downloaded packages and distfiles.

root@nextcloud:/ # rm -rf /var/cache/pkg

root@nextcloud:/ # rm -rf /usr/ports/obj

root@nextcloud:/ # rm -rf /usr/ports/distfiles

root@nextcloud:/ # pkg autoremove
Checking integrity... done (0 conflicting)
Deinstallation has been requested for the following 8 packages:

Installed packages to be REMOVED:
        autoconf-2.69_1
        autoconf-wrapper-20131203
        gettext-tools-0.19.8.1
        gmake-4.2.1_2
        help2man-1.47.6
        m4-1.4.18,1
        p5-Locale-gettext-1.07
        texinfo-6.5,1

Number of packages to be removed: 8

The operation will free 26 MiB.

Proceed with deinstalling packages? [y/N]: y

You have reached the end, good luck with your Nextcloud setup 😉

UPDATE 1 – SysV IPC in Jails

Since FreeBSD 11.0-RELEASE and FreeBSD 10.4-RELEASE the allow.sysvipc Jail parameter in /etc/jail.conf has been deprecated in favor of the sysvmsg/sysvsem/sysvshm parameters. This information is available in the man 8 jail manual page, for example.

host # man 8 jail
(...)
             allow.sysvipc
                     A process within the jail has access to System V IPC
                     primitives.  This is deprecated in favor of the per-
                     module parameters (see below).  When this parameter is
                     set, it is equivalent to setting sysvmsg, sysvsem, and
                     sysvshm all to "inherit".

(...)

Before this change there was a problem with SysV IPC calls because the PostgreSQL server user postgres needed to have a different UID in each Jail running on that host. Now you can have 100 Jails, each with the postgres user at UID 500, and everything works like a charm.

It's described broadly in this blog post – Postgres in FreeBSD Jails – https://planet.freebsd.org/brd/2017/11/07/postgres-in-freebsd-jails/.

Using newer approach these are the changes in the configuration in the /etc/jail.conf file.

host # diff -u /etc/jail.conf.OLD /etc/jail.conf
--- /etc/jail.conf.OLD  2018-04-05 13:04:18.556904000 +0200
+++ /etc/jail.conf      2018-04-05 13:04:10.828127000 +0200
@@ -8,5 +8,7 @@
   exec.clean;
   mount.devfs;
   allow.raw_sockets;
-  allow.sysvipc;
+  sysvsem = new;
+  sysvshm = new;
+  sysvmsg = new;
 }

… and the whole /etc/jail.conf file after modification.

host # cat /etc/jail.conf
nextcloud {
  host.hostname = nextcloud.local;
  ip4.addr = 192.168.43.100;
  interface = wlan0;
  path = /jail/nextcloud;
  exec.start = "/bin/sh /etc/rc";
  exec.stop = "/bin/sh /etc/rc.shutdown";
  exec.clean;
  mount.devfs;
  allow.raw_sockets;
  sysvsem = new;
  sysvshm = new;
  sysvmsg = new;
}
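
For the new parameters to take effect, the Jail has to be restarted, which may be done from the host with the rc.d jail script:

host # service jail restart nextcloud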

UPDATE 2 – Setup without Sockets

To run this setup without sockets, you may modify the PHP php-fpm daemon to listen on an IPv4 address instead of using the /var/run/php-fpm.sock socket for communication. These are the changes in the /usr/local/etc/php-fpm.d/www.conf php-fpm profile and in the Nginx webserver main configuration /usr/local/etc/nginx/nginx.conf file.

root@nextcloud:/ # diff -u /usr/local/etc/php-fpm.d/www.conf.SOCKET /usr/local/etc/php-fpm.d/www.conf
--- /usr/local/etc/php-fpm.d/www.conf.SOCKET    2018-04-05 12:46:06.351550000 +0200
+++ /usr/local/etc/php-fpm.d/www.conf   2018-04-05 12:46:09.324277000 +0200
@@ -1,7 +1,8 @@
 [www]
 user = www
 group = www
-listen = /var/run/php-fpm.sock
+listen = 192.168.43.100:9000
+listen.allowed_clients = 192.168.43.100
 listen.backlog = -1
 listen.owner = www
 listen.group = www
root@nextcloud:/ # diff -u /usr/local/etc/nginx/nginx.conf.SOCKET /usr/local/etc/nginx/nginx.conf
--- /usr/local/etc/nginx/nginx.conf.SOCKET      2018-04-05 12:42:09.051583000 +0200
+++ /usr/local/etc/nginx/nginx.conf     2018-04-05 12:42:30.491819000 +0200
@@ -16,7 +16,7 @@
   keepalive_timeout 65;
 
   upstream php-handler {
-    server unix:/var/run/php-fpm.sock;
+    server 192.168.43.100:9000;
   }
 
   server {
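
After changing both files, the php-fpm and Nginx daemons have to be restarted so that they pick up the new listen settings:

root@nextcloud:/ # /usr/local/etc/rc.d/php-fpm restart
root@nextcloud:/ # /usr/local/etc/rc.d/nginx restart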

UPDATE 3 – Nextcloud 13.0.1 Update

After an update to the latest Nextcloud 13.0.1 the setup is broken. The symptom is an endless redirect loop after the login page. The fix has been described in the /usr/ports/UPDATING file; here is the message with the fix itself.

20180404:
  AFFECTS: users of www/nextcloud
  AUTHOR: brnrd@FreeBSD.org

  With the 13.0.1 update the path for Apps bundled with the package has
  changed from "apps" to "apps-pkg". You must add an entry to the
  "apps_paths" array in config/config.php of your nextcloud installation,
  a patch for the default installation can be applied with:

  # cd /usr/local/www/nextcloud
  # su -m www -c "php ./occ config:import < /usr/local/share/nextcloud/fix-apps_paths.json"

So to fix the installation after an eventual upgrade to 13.0.1, these instructions need to be executed in the Jail.

root@nextcloud:/ # su -m www -c "php /usr/local/www/nextcloud/occ config:import < /usr/local/share/nextcloud/fix-apps_paths.json"

Hope that helps to resolve the issue.

UPDATE 4

The Nextcloud 13 on FreeBSD article was featured in the BSD Now 245 – ZFS User Conf 2018 episode.

Thanks for mentioning!

EOF