FreeBSD Bhyve Virtualization

The Bhyve FreeBSD hypervisor (usually pronounced ‘beehive’) was created almost 10 years ago. Today it offers speed and features on par with similar solutions such as KVM, VMware, or Xen. You can check all the details in the FreeBSD Handbook. One of the last things Bhyve lacks is so-called live migration between physical hosts, but save state and resume from saved state are currently in the works, so live migration should not be far off. Until recently I mostly used VirtualBox for my small virtualization needs. Then I started to evaluate Bhyve and this time I am very pleased. The FreeBSD VirtualBox integration is not perfect anyway – for example USB passthru has not worked for several years – and even when it worked, it was limited to USB 1.x speeds only. Also, because of the FreeBSD pkg(8) package building process, the VirtualBox packages remain broken for 3 months after each *.1 or later point release (*.2/*.3/…). The other impulse that pushed me from VirtualBox to Bhyve was VirtualBox (in)stability. I often needed to restart crashed VirtualBox VMs because they failed for some unspecified reason.

bhyve-logo

One of the Bhyve features that I especially like is that by default Bhyve only uses the memory that the guest system actually allocates. For example a FreeBSD virtual machine configured with 2 GB RAM will use only about 70 MB of RAM right after boot 🙂

Another great feature I really liked about Bhyve is that I can suspend the host machine with all the VMs running – both on my ThinkPad W520 laptop and my AMD based FreeBSD desktop – and then everything successfully resumes. With VirtualBox you would have to power off all VMs first, because if you suspend with VirtualBox VMs running, it will just crash. A suspend/resume cycle is simply not possible with VirtualBox.

The Table of Contents for this article is as follows:

  • FreeBSD Bhyve Virtualization
  • Bhyve Managers
  • Bhyve versus KVM and VMware
  • Bhyve libvirt/virt-manager GUI
  • vm-bhyve
    • Install/Setup
    • Networking
      • Server/Desktop LAN Bridge
      • Laptop WiFi NAT
      • Networking Restart
    • Datastores
    • Templates
    • NVMe
    • ISO Images
    • Guest OS Install
      • FreeBSD
      • Linux
      • Windows 7
      • Windows 10
        • Force Windows 10 Offline Account
        • Windows 10 Bloat Removers
      • Windows 11
    • Dealing with Locked VMs
    • Disk Resize
  • Summary

Bhyve Managers

While VirtualBox has quite a usable Qt based GUI, Bhyve does not have anything like that. I once saw some Qt GUI prototype for Bhyve but it was very basic – so forget about that for now. There are however several web interfaces such as TrueNAS CORE or CBSD/ClonOS. There are also several terminal managers such as vm-bhyve. The older one – iohyve – has not been maintained for at least 6 long years. There is also a libvirt Bhyve driver, but more on that later.

Bhyve versus KVM and VMware

Klara Systems compared Bhyve to KVM and Benjamin Bryan compared it against the VMware hypervisor. While Bhyve remains competitive against both of them, two important findings from the Klara Systems comparison stand out and are worth repeating here.

First – using the nvme driver is a lot faster than using the more traditional virtio-blk or ahci-hd backends.

bhyve-nvme

Second – and this one seems strange – using a raw file is faster than using a ZFS zvol device.

bhyve-raw-zvol

To summarize these findings – just use a raw file on disk such as disk0.img and use nvme as the storage backend every time the guest operating system supports it.
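In a vm-bhyve guest config those two findings translate into a disk section like the sketch below – disk0_dev="file" is the vm-bhyve default for file-backed disks, so it is shown here only for clarity:

```shell
disk0_type="nvme"       # nvme backend - the faster option in the Klara Systems tests
disk0_dev="file"        # raw file on disk instead of a ZFS zvol
disk0_name="disk0.img"
```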

Bhyve libvirt/virt-manager GUI

Theoretically the libvirt virtualization API supports Bhyve as one of its backends and the details about the Bhyve driver are available here – https://libvirt.org/drvbhyve.html. I tried it with virt-manager and after some basic configuration I was able to start a FreeBSD 13.2 installation … but it froze at the kernel messages and nothing more happened.

virt-manager-boot-menu

… and below is the moment where it hung. I tried multiple times with the same effect.

virt-manager-hang

I really liked the virtual machine settings window of virt-manager.

virt-manager-machine-settings

vm-bhyve

While you can use Bhyve directly with the bhyve(8) and bhyvectl(8) commands – which I did in the past – after trying vm-bhyve both on the desktop and in the server space I really liked it and this is what I currently use. I just moved from the vm-bhyve package to the newer vm-bhyve-devel one.

The vm(8) command is simple and provides all needed use cases.

host # vm help
vm-bhyve: Bhyve virtual machine management v1.6-devel (rev. 106001)
Usage: vm ...
    version
    init
    set [setting=value] [...]
    get [all|setting] [...]
    switch list
    switch info [name] [...]
    switch create [-t type] [-i interface] [-n vlan-id] [-m mtu] [-a address/prefix-len] [-b bridge] [-p] <name>
    switch vlan <name> <vlan|0>
    switch nat <name> <on|off>
    switch private <name> <on|off>
    switch add <name> <interface>
    switch remove <name> <interface>
    switch destroy <name>
    datastore list
    datastore add <name> <spec>
    datastore remove <name>
    list
    info [name] [...]
    create [-d datastore] [-t template] [-s size] [-m memory] [-c vCPUs] <name>
    install [-fi] <name> <iso>
    start [-fi] <name> [...]
    stop <name> [...]
    restart <name>
    console <name> [com1|com2]
    configure <name>
    rename <name> <new-name>
    add [-d device] [-t type] [-s size|switch] <name>
    startall
    stopall
    reset [-f] <name>
    poweroff [-f] <name>
    destroy [-f] <name>
    passthru
    clone <name[@snapshot]> <new-name>
    snapshot [-f] <name[@snapshot]>
    rollback [-r] <name@snapshot>
    iso [url]
    img [url]
    image list
    image create [-d description] [-u] <name>
    image destroy <uuid>
    image provision [-d datastore] <uuid> <new-name>

Install/Setup

We only need to add several packages.

host # pkg install -y \
         vm-bhyve-devel \
         uefi-edk2-bhyve-csm \
         bhyve-firmware \
         edk2-bhyve \
         dnsmasq \
         grub2-bhyve \
         tigervnc-viewer \
         rdesktop

The setup is also pretty easy.

First we need to add several vm_* settings into the main FreeBSD /etc/rc.conf file.

  vm_enable=YES
  vm_dir="zfs:zroot/vm"
  vm_list=""
  vm_delay=3

Keep in mind that you will later use vm_list="" for the list of VMs that you would like started at boot – for example vm_list="freebsd13 freebsd14uefi". The vm list command will then show [1] next to the freebsd13 name (as it is first) and [2] next to the freebsd14uefi name (as it is second on the list). See below.

host # vm list
NAME           DATASTORE  LOADER     CPU  MEMORY  VNC           AUTO     STATE
almalinux8     default    uefi       2    2G      0.0.0.0:5908  No       Running (11819)
freebsd13      default    bhyveload  1    256M    -             Yes [1]  Running (2342)
freebsd14      default    bhyveload  1    256M    -             No       Stopped
freebsd14uefi  default    uefi       2    8G      -             Yes [2]  Running (35394)
windows10      default    uefi       2    2G      -             No       Stopped
windows7       default    uefi       2    2G      -             No       Stopped

We need to create a dedicated ZFS dataset for our VMs. You can also use directory on UFS – check vm-bhyve documentation.

host # zfs create -o mountpoint=/vm zroot/vm

We will also copy the available templates to our new /vm dir.

host # cp -a /usr/local/share/examples/vm-bhyve /vm/.templates

Remember to check /vm/.templates/config.sample as it has the documentation for all available options.

host # head -12 /vm/.templates/config.sample
# This is a sample configuration file containing all supported options
# Please do not try and use this file itself for a guest
# For any option that contains a number in the name, such as "network0_type",
# you can add additional devices of that type by creating a new set of
# variables using the next number in sequence, e.g "network1_type"
#
# Please make sure all option names are specified in lowercase and
# at the beginning of the line. If there is any whitespace before
# the option name, the line will be ignored.
# The '#' character signifies the start of a comment, even within
# double-quotes, and so cannot be used inside any values.

We can now start and initialize vm-bhyve.

host # service vm start

Networking

There are as many network setups as FreeBSD has network capabilities – a lot! In this guide I will cover the two most typical network setups for Bhyve. One is more server (or desktop) oriented, as it requires a LAN card. The other one I would call a laptop setup – it provides network connectivity using the wlan0 WiFi interface.

No matter which one we choose, we need to enable packet forwarding on our FreeBSD host. Do that with the commands below.

host # sysrc gateway_enable=YES

host # sysctl net.inet.ip.forwarding=1

host # echo net.link.tap.up_on_open=1 >> /etc/sysctl.conf

host # sysctl net.link.tap.up_on_open=1

I assume that our FreeBSD host system would use 10.0.0.10/24 IP address and that 10.0.0.1 would be its default gateway.

Your host system's main /etc/rc.conf file can then look as follows.

host # cat /etc/rc.conf
# NETWORK
  hostname=host
  ifconfig_re0="inet 10.0.0.10/24 up"
  defaultrouter="10.0.0.1"
  gateway_enable=YES

# DAEMONS
  sshd_enable=YES
  zfs_enable=YES

# BHYVE
  vm_enable="YES"
  vm_dir="zfs:zroot/vm"
  vm_list=""
  vm_delay="3"

Server/Desktop LAN Bridge

We will use the 10.0.0.0/24 network – the same one that our host system uses. We will need one bridge/switch named vm-public with the 10.0.0.100/24 address on it. Without that address dnsmasq will later complain about it with unknown interface vm-public. Information about the switches is kept in the /vm/.config/system.conf file. We will also need to add our LAN interface to the public switch. That will be the re0 interface in my case.

host # vm switch create -a 10.0.0.100/24 public

host # vm switch add public re0

host # vm switch list
NAME    TYPE      IFACE      ADDRESS        PRIVATE  MTU  VLAN  PORTS
public  standard  vm-public  10.0.0.100/24  no       -    -     re0

host # cat /vm/.config/system.conf
switch_list="public"
type_public="standard"
addr_public="10.0.0.100/24"
ports_public="re0"

With that, the networking setup is complete.

When setting up your Bhyve VMs you will either use static addresses from the 10.0.0.0/24 IP space, or just use DHCP – the DHCP server already present on your network will take care of the rest (assuming you have one).
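If you choose static addressing instead of relying on a DHCP server, the guest side is plain FreeBSD network configuration. Below is a sketch of a guest /etc/rc.conf fragment – assuming a FreeBSD guest with the default virtio-net NIC (which shows up as vtnet0) and an example 10.0.0.33 address:

```shell
ifconfig_vtnet0="inet 10.0.0.33/24"   # static address from the host LAN range
defaultrouter="10.0.0.1"              # the same gateway the host uses
```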

If you do not have one you may use dnsmasq service to do that easily.

host # cat /usr/local/etc/dnsmasq.conf
port=0
no-resolv
server=1.1.1.1
except-interface=lo0
bind-interfaces
local-service
dhcp-authoritative
interface=vm-public
dhcp-range=10.0.0.69,10.0.0.96

host # service dnsmasq enable

host # service dnsmasq start

That should do it.

Now – to access the VMs in this bridged networking mode you can just ssh(1) to their IP address directly.

Laptop WiFi NAT

This is one of the cases where VirtualBox has one more feature than Bhyve. With VirtualBox it is possible to use bridge mode over a WiFi interface. That is currently not possible with Bhyve. I have submitted a proposal to the FreeBSD Foundation to implement such a configuration – especially as open source VirtualBox code for it already exists. Time will tell if it gets implemented or if there are more important tasks to take care of.

We will use the 10.1.1.0/24 network for our VM needs. We will also need only one vm-bhyve switch – the vm-public one with the 10.1.1.1/24 address – which we will use as the gateway for our NATed VMs. Information about the switches is kept in the /vm/.config/system.conf file.

host # vm switch create -a 10.1.1.1/24 public

host # vm switch list
NAME    TYPE      IFACE      ADDRESS      PRIVATE  MTU  VLAN  PORTS
public  standard  vm-public  10.1.1.1/24  no       -    -     -

host # cat /vm/.config/system.conf 
switch_list="public"
type_public="standard"
addr_public="10.1.1.1/24"

Now the NAT part – we will do that with a very simple pf(4) config.

host # cat /etc/pf.conf
# SKIP LOOPBACK
  set skip on lo0

# bhyve(8) VMS NAT 
  nat on wlan0 from {10.1.1.1/24} to any -> (wlan0)

# PASS IN/OUT ALL
  pass in all
  pass out all

host # service pf enable

host # service pf start

You can check the stats of these pf(4) rules like this.

host # pfctl -Psn -vv
No ALTQ support in kernel
ALTQ related functions disabled
@0 nat on wlan0 inet from 10.1.1.0/24 to any -> (wlan0) round-robin
  [ Evaluations: 18774     Packets: 362277    Bytes: 352847937   States: 0     ]
  [ Inserted: uid 0 pid 69837 State Creations: 38    ]

Feel free to add all your pf(4) rules into the /etc/pf.conf file.

Now the DHCP server. For simplicity of the setup we will use the dnsmasq daemon – but nothing prevents you from setting up a Highly Available DHCP Server using the isc-dhcp44-server package instead.

host # cat /usr/local/etc/dnsmasq.conf
port=0
no-resolv
server=1.1.1.1
except-interface=lo0
bind-interfaces
local-service
dhcp-authoritative
interface=vm-public
dhcp-range=10.1.1.11,10.1.1.99

host # service dnsmasq enable

host # service dnsmasq start

Now you should be ready to set up Bhyve VMs on your laptop.

When I was using VirtualBox, it allowed me to use its Port Forwarding feature, where I could add as many mappings as needed in the NAT network type, as shown below.

vbox-port-forwarding

… but with Bhyve it is even better, as no port forwarding in NAT mode is needed at all! 🙂

You can access the Bhyve VMs in NAT networking mode the same way as in bridged mode – just ssh(1) to their IP address directly from the 10.1.1.0/24 range.

So to put it bluntly – if a Bhyve VM in NAT mode got the 10.1.1.33/24 IP address, then just ssh(1) to that IP address directly from the host system.

Networking Restart

Sometimes – for example when your laptop boots without network connectivity – the tap(4) interfaces do not come UP.

bhyve-networking-restart

There is a simple fix for that problem – the bhyve-network-restart.sh script.

It is shown below.

#!/bin/sh

# ADD IP ADDRESS TO EACH vm-bhyve SWITCH
vm switch list \
  | sed 1d \
  | while read NAME TYPE IFACE ADDRESS PRIVATE MTU VLAN PORTS a0 a1 a2 a3 a4 a5 a6 a7 a8 a9
    do
      if [ "${ADDRESS}" != "-" ]
      then
             vm switch address ${NAME} ${ADDRESS}
        echo vm switch address ${NAME} ${ADDRESS}
      fi
    done        

# SET TO 'up' ALL vm-bhyve SWITCH MEMBERS
vm switch list \
  | sed 1d \
  | awk '{print $1}' \
  | while read SWITCH
    do
      ifconfig vm-${SWITCH} \
        | awk '/member:/ {print $2}' \
        | while read INTERFACE
          do
                 ifconfig ${INTERFACE} up
            echo ifconfig ${INTERFACE} up
          done
    done


Execute it every time you lose connectivity with your VMs and you are done.
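If you would rather not run the script by hand, one option is to let devd(8) fire it whenever the WiFi link comes up. Below is a sketch of a devd.conf(5) fragment – the /root/bin/bhyve-network-restart.sh path is hypothetical, adjust it to wherever you keep the script:

```
notify 0 {
    match "system"    "IFNET";
    match "subsystem" "wlan0";
    match "type"      "LINK_UP";
    action "/root/bin/bhyve-network-restart.sh";
};
```

Drop it into the /usr/local/etc/devd/ directory and run service devd restart to load it.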

Datastores

While vm-bhyve supports multiple datastores, you will likely only need one – the default one.

host # vm datastore list
NAME            TYPE        PATH                      ZFS DATASET
default         zfs         /vm                       zroot/vm

Snapshots and Clones

vm-bhyve also supports snapshots and clones of the VM disks. Under the hood they are just ZFS snapshots and clones.
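For example – a quick sketch using the guests from this article (stop the guest first, or add the -f flag to force, as snapshotting a running VM only gives a crash-consistent disk state):

```shell
vm snapshot freebsd13@fresh              # ZFS snapshot of the guest dataset
vm rollback freebsd13@fresh              # roll the guest back to that snapshot
vm clone freebsd13@fresh freebsd13alt    # ZFS clone into a new guest
```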

Templates

While vm-bhyve comes with several handy templates, they are incomplete – and several small changes make them more useful.

NVMe

First we will implement what we know works faster – the nvme type for disk images instead of the virtio-blk or ahci-hd ones. Of course not all operating systems support such devices – for those we will use the latter options.

A fast way to change it to nvme is shown below.

host # sed -i '' s.virtio-blk.nvme.g /vm/.templates/freebsd.conf

ISO Images

Each VM needs an ISO image from which it will be installed. Of course you can also just create a new VM and copy the disk contents from another server, or use one of the official FreeBSD VM images.

There are two ways to feed vm-bhyve with ISO images.

One is to fetch them from some URL.

host # vm iso http://ftp.freebsd.org/pub/FreeBSD/releases/ISO-IMAGES/13.2/FreeBSD-13.2-RELEASE-amd64-disc1.iso

host # vm iso
DATASTORE           FILENAME
default             FreeBSD-13.2-RELEASE-amd64-disc1.iso

The other way is to simply copy the ISO file into the /vm/.iso directory.

host # cp /home/vermaden/download/ubuntu-mate-23.04-desktop-amd64.iso /vm/.iso/

host # vm iso
DATASTORE           FILENAME
default             FreeBSD-13.2-RELEASE-amd64-disc1.iso
default             ubuntu-mate-23.04-desktop-amd64.iso

Guest OS Install

Generally each VM install is very similar, as shown below.

host # vm create -t TEMPLATE NAME

host # vm install NAME ISO

host # vm console NAME

Example for FreeBSD is below.

host # vm create -t freebsd freebsd13

host # vm install freebsd13 FreeBSD-13.2-RELEASE-amd64-disc1.iso
Starting freebsd13
  * found guest in /vm/freebsd13
  * booting...

host # vm console freebsd13

You will probably see something like the screen below.

freebsd-loader-menu

Then you do the installation in the text mode and after reboot you have your running FreeBSD VM.

host # vm list
NAME          DATASTORE  LOADER     CPU  MEMORY  VNC  AUTO     STATE
freebsd13     default    bhyveload  1    256M    -    Yes [1]  Running (85315)

Some more info can be displayed with the info argument.

host # vm info freebsd13
------------------------
Virtual Machine: freebsd13
------------------------
  state: stopped
  datastore: default
  loader: bhyveload
  uuid: a91287a1-39d3-11ee-b73d-f0def1d6aea1
  cpu: 1
  memory: 256M

  network-interface
    number: 0
    emulation: virtio-net
    virtual-switch: public
    fixed-mac-address: 58:9c:fc:0b:98:30
    fixed-device: -

  virtual-disk
    number: 0
    device-type: file
    emulation: nvme
    options: -
    system-path: /vm/freebsd13/disk0.img
    bytes-size: 21474836480 (20.000G)
    bytes-used: 885089280 (844.086M)

  snapshots
    zroot/vm/freebsd13@fresh    85.2M   Mon Aug 14 11:18 2023

host # env EDITOR=cat vm configure freebsd13
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="nvme"
disk0_name="disk0.img"
uuid="a91287a1-39d3-11ee-b73d-f0def1d6aea1"
network0_mac="58:9c:fc:0b:98:30"

If you want to edit – and not only display – the VM config, use this.

host # vm configure freebsd13

FreeBSD

FreeBSD can be booted in two ways. One is with bhyveload, which may be translated to legacy BIOS boot. You can of course also boot FreeBSD in UEFI mode.

host # cat /vm/.templates/freebsd.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="nvme"
disk0_name="disk0.img"

The above will use bhyveload and it mostly works … but sometimes when you want to install a much newer FreeBSD version under Bhyve, the host loader may not have all the needed features. I was hit by this problem recently when I used FreeBSD 13.2-RELEASE for the FreeBSD host system and wanted to try the 14.0-ALPHA1 version.

I described the details of this problem in a bug report – FreeBSD Bug 273099.

This is how such an error looks:

| FreeBSD/amd64 User boot lua, Revision 1.2
| ZFS: unsupported feature: com.klarasystems:vdev_zaps_v2
| ERROR: cannot open /boot/lua/loader.lua: no such file or directory.
| 
| Type '?' for a list of commands, 'help' for more detailed help.
| OK 

To overcome that you will need the latest (more up to date than the 14.0-ALPHA1 version) FreeBSD sources and the commands below.

host # pkg install gitup

host # cp /usr/local/etc/gitup.conf.sample /usr/local/etc/gitup.conf

host # gitup current

host # cd /usr/src/stand

host # make

host # find /usr/obj -type f -name userboot_lua.so
/usr/obj/usr/src/amd64.amd64/stand/userboot/userboot_lua/userboot_lua.so

host # cp /usr/obj/usr/src/amd64.amd64/stand/userboot/userboot_lua/userboot_lua.so /vm/userboot_lua.so

Now we need to add the bhyveload_loader="/vm/userboot_lua.so" option to our FreeBSD 14.0-ALPHA1 machine config.

host # cat /vm/freebsd14/freebsd14.conf
loader="bhyveload"
bhyveload_loader="/vm/userboot_lua.so"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="nvme"
disk0_name="disk0.img"
uuid="975bca2a-39c4-11ee-b73d-f0def1d6aea1"
network0_mac="58:9c:fc:03:67:47"

Now it will boot properly.

Of course it would also have been very easy to overcome that by using UEFI boot instead.

host # cat /vm/freebsd14uefi/freebsd14uefi.conf
loader="uefi"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="nvme"
disk0_name="disk0.img"
uuid="35ca42b7-7f28-43eb-afd9-2488c5ec83cf"
network0_mac="58:9c:fc:0a:16:4b"

Linux

By default the grub way is the proposed way for Linux. I do not use it as it only allows console access – and even many so-called enterprise grade Linux distributions such as AlmaLinux or Rocky have graphical installers that need/want a graphical display … and that is only available in uefi mode.

Maybe for Alpine or Void Linux the grub approach may be usable … but uefi also works very well – thus I do not see ANY advantage to using the grub way here.

The next example is based on an AlmaLinux 8.x install, but the same worked properly with Ubuntu MATE for example.

First the default template.

host # cat /vm/.templates/linux.conf
loader="uefi"
cpu=2
memory=4G
network0_type="virtio-net"
network0_switch="public"
disk0_type="nvme"
disk0_name="disk0.img"
xhci_mouse="yes"
graphics="yes"

The added xhci_mouse="yes" option uses the more precise xhci(4) USB 3.x mouse driver, and graphics="yes" enables the VNC console.

With such a template the installation looks like this.

host # cp AlmaLinux-8.8-x86_64-minimal.iso /vm/.iso/

host # vm create -t linux almalinux8

host # vm install almalinux8 AlmaLinux-8.8-x86_64-minimal.iso
Starting almalinux8
  * found guest in /vm/almalinux8
  * booting...

host # vm list
NAME           DATASTORE  LOADER     CPU  MEMORY  VNC           AUTO  STATE
almalinux8     default    uefi       2    4G      0.0.0.0:5900  No    Running (11819)

host % vncviewer -SendClipboard -AcceptClipboard -LowColorLevel -QualityLevel 6 :5900 &

The last vncviewer(1) command is executed as a regular user. It comes from the net/tigervnc-viewer package.

If you are connecting to some external server, then use its IP address in the command.

host % vncviewer -SendClipboard -AcceptClipboard -LowColorLevel -QualityLevel 6 10.0.0.66::5900 &

After the Linux system is installed you may specify the exact VNC port or listen IP address, as well as the screen resolution, or enable/disable waiting for the VNC connection.

graphics_port="5900"
graphics_listen="0.0.0.0"
graphics_res="1400x900"
graphics_wait="no"

Windows 7

A lot of people will criticize me for this one – as Windows 7 is not an officially supported version anymore. I do not care about that when I want to use some local-only software … or older software that works better on an older version. Not to mention that it is one of the last Windows versions that does not force an online Microsoft account down your throat. It also uses fewer resources and is more responsive.

First – the template – similar to the Linux one.

host # cat /vm/.templates/windows7.conf           
loader="uefi"
graphics="yes"
cpu=2
memory=2G
ahci_device_limit="8"
network0_type="e1000"
network0_switch="public"
disk0_type="ahci-hd"
disk0_name="disk0.img"
disk0_opts="sectorsize=512"
utctime="no"
bhyve_options="-s 8,hda,play=/dev/dsp,rec=/dev/dsp"

If you set the xhci_mouse="yes" option with Windows 7, you will end up without a working mouse in the VNC connection and you will have to do the entire install and configuration by keyboard only.

One may think about adding xhci_mouse="yes" after the installation, when you already have a working RDP connection – but that would also require additional drivers. In theory the device VEN_8086&DEV_1E31 name is recognized as Intel USB 3.0 eXtensible Host Controller … but for some reason every time I tried to install it, the Windows 7 system crashed and instantly rebooted.

The other, even more important, thing is the disk0_opts="sectorsize=512" option. Without it the Windows 7 installer will fail with the following error.

win-7-install-error-NO-512-BLOCKS

The last option bhyve_options="-s 8,hda,play=/dev/dsp,rec=/dev/dsp" enables audio.

The install procedure is also similar to Linux.

host # cp win_7_amd64_sp1_en.iso /vm/.iso/

host # vm iso
DATASTORE           FILENAME
default             win_7_amd64_sp1_en.iso

host # vm create -t windows7 -s 40G windows7

host # vm install windows7 win_7_amd64_sp1_en.iso
Starting windows7
  * found guest in /vm/windows7
  * booting...

host # vm list
NAME           DATASTORE  LOADER     CPU  MEMORY  VNC           AUTO  STATE
windows7       default    uefi       2    2G      0.0.0.0:5900  No    Running (11819)

host % vncviewer -SendClipboard -AcceptClipboard -LowColorLevel -QualityLevel 6 :5900 &

After the install we should enable RDP connections for more features. Remember to select the any version option.

win-7-computer-properties-advanced-remote

You can add one or more CD-ROM drives with the following options via the configure argument.

disk1_type="ahci-cd"
disk1_dev="custom"
disk1_name="/vm/.iso/virtio-drivers.iso"

It is easier for RDP connections to have a static IP instead of a DHCP one.

win-7-network-adapter-IPv4-static

Now that we have the static 10.1.1.7 IP address we can open an RDP connection with the rdesktop(1) command.

host % rdesktop -u buser -p bpass -P -N -z -g 1800x1000 -a 24 -r sound:local -r disk:HOME=/home/vermaden 10.1.1.7
Autoselecting keyboard map 'en-us' from locale

ATTENTION! The server uses and invalid security certificate which can not be trusted for
the following identified reasons(s);

 1. Certificate issuer is not trusted by this system.

     Issuer: CN=vbox


Review the following certificate info before you trust it to be added as an exception.
If you do not trust the certificate the connection atempt will be aborted:

    Subject: CN=vbox
     Issuer: CN=vbox
 Valid From: Mon Aug 14 00:58:25 2023
         To: Mon Feb 12 23:58:25 2024

  Certificate fingerprints:

       sha1: 4ad853c40a8aa0560af315b691038202506e07ce
     sha256: 44ec8f7650486aef6261aea42da99caba4e84d7bc58341c0ca1bb8e28b81d222


Do you trust this certificate (yes/no)? yes
Connection established using SSL.

There are several useful options here.

The -u buser and -p bpass will take care of credentials.

The -P option enables caching of bitmaps to disk (persistent bitmap caching). This improves performance (especially on low bandwidth connections) and reduces network traffic.

The -N option enables numlock synchronization between the X11 server and remote RDP session.

The -z enables compression of the RDP datastream.

The -g 1800x1000 and -a 24 specifies resolution and color depth rate.

The -r disk:HOME=/home/vermaden option enables transparent sharing of your home directory – the additional share shows up in My Computer on the Windows 7 machine – very handy for sharing files between the host and the guest VM, as shown below.

win-7-sharing

The last option – -r sound:local – specifies that the audio will be handled by the guest VM's emulated sound device – this will only work if you added the bhyve_options="-s 8,hda,play=/dev/dsp,rec=/dev/dsp" line to the Windows 7 Bhyve config. Alternatively, without that hda(4) emulation, you can use the -r sound:remote option – this uses the RDP protocol to transfer audio events from the guest machine to your host machine, where the audio is then played locally.

Windows 10

Finally a supported version.

Template is similar to the Windows 7 one.

host # cat /vm/.templates/windows10.conf
loader="uefi"
graphics="yes"
xhci_mouse="yes"
cpu=2
memory=2G
ahci_device_limit="8"
network0_type="e1000"
network0_switch="public"
disk0_type="nvme"
disk0_name="disk0.img"
utctime="no"
bhyve_options="-s 8,hda,play=/dev/dsp,rec=/dev/dsp"

Windows 10 supports the xhci_mouse="yes" option, so we enable it and keep it enabled all the time.

Windows 10 does not need the disk0_opts="sectorsize=512" option.

As Windows 10 is newer, the nvme disk type can (and should) be used for performance reasons.

The last option bhyve_options="-s 8,hda,play=/dev/dsp,rec=/dev/dsp" enables audio.

The install procedure is also similar to Windows 7.

host # cp win_10_amd64_en_LTSC.iso /vm/.iso/

host # vm iso
DATASTORE           FILENAME
default             win_10_amd64_en_LTSC.iso

host # vm create -t windows10 -s 40G windows10

host # vm install windows10 win_10_amd64_en_LTSC.iso
Starting windows10
  * found guest in /vm/windows10
  * booting...

host # vm list
NAME           DATASTORE  LOADER     CPU  MEMORY  VNC           AUTO  STATE
windows10      default    uefi       2    2G      0.0.0.0:5900  No    Running (11819)

host % vncviewer -SendClipboard -AcceptClipboard -LowColorLevel -QualityLevel 6 :5900 &

After the install we should enable RDP connections for more features. Remember to select the any version option.

win-10-advanced-settings-remote

You can add one or more CD-ROM drives with the following options via the configure argument.

disk1_type="ahci-cd"
disk1_dev="custom"
disk1_name="/vm/.iso/virtio-drivers.iso"

It is easier for RDP connections to have a static IP instead of a DHCP one.

win-10-network-settings-adapter-properties-IPv4-static

Now that we have the static 10.1.1.8 IP address we can open an RDP connection with the rdesktop(1) command.

host % rdesktop -u buser -p bpass -P -N -z -g 1600x900 -a 24 -r sound:local -r disk:HOME=/home/vermaden 10.1.1.8
Autoselecting keyboard map 'en-us' from locale

ATTENTION! The server uses and invalid security certificate which can not be trusted for
the following identified reasons(s);

 1. Certificate issuer is not trusted by this system.

     Issuer: CN=DESKTOP-HKJ3H6T


Review the following certificate info before you trust it to be added as an exception.
If you do not trust the certificate the connection atempt will be aborted:

    Subject: CN=DESKTOP-HKJ3H6T
     Issuer: CN=DESKTOP-HKJ3H6T
 Valid From: Mon Aug 14 10:33:41 2023
         To: Tue Feb 13 09:33:41 2024

  Certificate fingerprints:

       sha1: 967d5cdb164e53f7eb4c5c17b0343f2f279fb709
     sha256: c08b732122a39c44d91fac2a9093724da12d2f3e6ea51613245d13cf762f4cd2


Do you trust this certificate (yes/no)? yes

Options are the same as with Windows 7 and they are described in the Windows 7 section.

Force Windows 10 Offline Account

To force creation of a local account instead of the forced online account you need to boot Windows 10 without network.

Do the following steps to do that.

host # yes | vm poweroff windows10

host # vm configure windows10
- network0_type="e1000"
- network0_switch="public"

host # vm start windows10

Now create the offline account.

After creating it, power off the Windows 10 VM.

host # vm configure windows10
+ network0_type="e1000"
+ network0_switch="public"

host # vm start windows10

Now you have a local account on the Windows 10 system.

Windows 10 Bloat Removers

You may consider using one of the known Windows 10 bloat removers.

Windows 11

The setup/install of Windows 11 is the same as Windows 10.

Dealing with Locked VMs

Let's assume that our host system crashed.

vm-bhyve will then leave run.lock files in the machine directories.

host # ls -l /vm/freebsd14uefi
total 1389223K
-rw-r--r-- 1 root wheel          32 2023-08-16 23:36 console
-rw------- 1 root wheel 21474836480 2023-08-16 23:46 disk0.img
-rw-r--r-- 1 root wheel         200 2023-08-16 23:35 freebsd14uefi.conf
-rw-r--r-- 1 root wheel          11 2023-08-16 23:36 run.lock
-rw-r--r-- 1 root wheel        5583 2023-08-16 23:36 vm-bhyve.log

host # vm list
NAME           DATASTORE  LOADER     CPU  MEMORY  VNC  AUTO     STATE
almalinux8     default    uefi       2    2G      -    No       Stopped
freebsd13      default    bhyveload  1    256M    -    Yes [1]  Running (19258)
freebsd13alt   default    bhyveload  1    256M    -    No       Stopped
freebsd14      default    bhyveload  1    256M    -    No       Stopped
freebsd14uefi  default    uefi       2    8G      -    No       Locked (w520.local)
windows10ltsc  default    uefi       2    2G      -    No       Stopped
windows7       default    uefi       2    2G      -    No       Stopped

host # rm /vm/freebsd14uefi/run.lock

host # vm list
NAME           DATASTORE  LOADER     CPU  MEMORY  VNC  AUTO     STATE
almalinux8     default    uefi       2    2G      -    No       Stopped
freebsd13      default    bhyveload  1    256M    -    Yes [1]  Running (19258)
freebsd13alt   default    bhyveload  1    256M    -    No       Stopped
freebsd14      default    bhyveload  1    256M    -    No       Stopped
freebsd14uefi  default    uefi       2    8G      -    No       Stopped
windows10ltsc  default    uefi       2    2G      -    No       Stopped
windows7       default    uefi       2    2G      -    No       Stopped

Now you may want to start the locked machine properly.
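For example – to start the freebsd14uefi machine from the listing above again – the usual vm-bhyve command is used:

```
host # vm start freebsd14uefi
```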

Disk Resize

By default vm-bhyve creates disks 20 GB in size.

To resize the Bhyve virtual machine disk we will use the truncate(1) command.

host # vm stop freebsd13

host # cd /vm/freebsd13

host # truncate -s 40G disk0.img

host # vm start freebsd13

If you are not sure about that – you may work on a copy instead.

host # vm stop freebsd13

host # truncate -s 40G disk0.img.NEW

host # dd bs=1m if=disk0.img of=disk0.img.NEW conv=notrunc status=progress
  20865613824 bytes (21 GB, 19 GiB) transferred 43.002s, 485 MB/s
20480+0 records in
20480+0 records out
21474836480 bytes transferred in 43.454036 secs (494196586 bytes/sec)

host # mv disk0.img disk0.img.BACKUP

host # mv disk0.img.NEW disk0.img

host # vm start freebsd13
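Note that truncate(1) only grows the apparent size of the image – the added space is sparse and gets allocated on the host filesystem only when the guest actually writes to it. A quick illustration on a throwaway file (demo.img is just an example name):

```shell
# Demo on a throwaway file showing that truncate(1) grows the apparent
# size without allocating the new blocks yet.
truncate -s 1M demo.img   # create/extend a sparse 1 MiB file
wc -c < demo.img          # apparent size in bytes: 1048576
rm demo.img               # clean up
```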

Now we need to resize the filesystem inside the VM.

freebsd13 # lsblk
DEVICE         MAJ:MIN SIZE TYPE                                          LABEL MOUNT
nvd0             0:90   40G GPT                                               - -
  nvd0p1         0:91  512K freebsd-boot                           gpt/gptboot0 -
           -:-   492K -                                                 - -
  nvd0p2         0:92  2.0G freebsd-swap                              gpt/swap0 SWAP
  nvd0p3         0:93   18G freebsd-zfs                                gpt/zfs0 
           -:-   1.0M -                                                 - -

freebsd13 # geom disk list
Geom name: nvd0
Providers:
1. Name: nvd0
   Mediasize: 42949672960 (40G)
   Sectorsize: 512
   Mode: r2w2e3
   descr: bhyve-NVMe
   lunid: 589cfc2081410001
   ident: NVME-4-0
   rotationrate: 0
   fwsectors: 0
   fwheads: 0

freebsd13 # gpart show
=>      40  41942960  nvd0  GPT  (40G) [CORRUPT]
        40      1024     1  freebsd-boot  (512K)
      1064       984        - free -  (492K)
      2048   4194304     2  freebsd-swap  (2.0G)
   4196352  37744640     3  freebsd-zfs  (18G)
  41940992      2008        - free -  (1.0M)

freebsd13 # gpart recover nvd0
nvd0 recovered

freebsd13 # gpart show
=>      40  83886000  nvd0  GPT  (40G)
        40      1024     1  freebsd-boot  (512K)
      1064       984        - free -  (492K)
      2048   4194304     2  freebsd-swap  (2.0G)
   4196352  37744640     3  freebsd-zfs  (18G)
  41940992  41945048        - free -  (20G)

freebsd13 # gpart resize -i 3 -a 1m nvd0
nvd0p3 resized

freebsd13 # gpart show
=>      40  83886000  nvd0  GPT  (40G)
        40      1024     1  freebsd-boot  (512K)
      1064       984        - free -  (492K)
      2048   4194304     2  freebsd-swap  (2.0G)
   4196352  79687680     3  freebsd-zfs  (38G)
  83884032      2008        - free -  (1.0M)

freebsd13 # zpool status
  pool: zroot
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          nvd0p3    ONLINE       0     0     0

freebsd13 # zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zroot  17.5G  17.0G   544M        -         -    87%    96%  1.00x    ONLINE  -

freebsd13 # zpool set autoexpand=on zroot

freebsd13 # zpool online -e zroot nvd0p3

freebsd13 # zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zroot  37.5G  17.0G  20.5G        -         -    41%    45%  1.00x    ONLINE  -

Summary

I hope I was able to provide all needed information.

Let me know in comments if I missed something.

UPDATE 1 – The sysutils/edk2 Issue

Recently a lot of people started to get problems with running UEFI machines on Bhyve. After a short investigation (details in the 273560 BUG report) the root cause turned out to be the new sysutils/edk2 version. The problem does not exist as long as you use the -A flag for the bhyve(8) command. Unfortunately it is not the default for vm-bhyve and the -A option is needed in the bhyve_options parameter of each UEFI boot VM.

For example:

host % grep bhyve_options /vm/freebsd14uefi/freebsd14uefi.conf 
bhyve_options="-A"

Additional details for vm-bhyve available HERE.

Hope that helps.

EOF

NFSv4 Server Inside FreeBSD VNET Jail

Not so long ago I wrote an article about running an NFS Server Inside FreeBSD VNET Jail – which at that time was only possible using the net/unfs3 NFS server from the FreeBSD Ports that runs in userspace. The kernel solution was not possible back then and while this little daemon comes in handy – it is limited to NFSv3 only. The status quo changed recently – mainly thanks to Rick Macklem who made the changes and created the needed patches. The FreeBSD 2022 Q4 Status Report even described it as one of the ongoing projects and it looked like that:

status

You can check Rick's work and commits about this topic in the usual places:

He even created a short HOWTO on how to test this new feature.

The good thing is that you no longer need to patch and rebuild FreeBSD to be able to run the nfsd(8) NFSv4 server inside a FreeBSD Jail. It is already 'upstream' in the 14-CURRENT and 13-STABLE branches.
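If you want to confirm that your branch already has the in-Jail nfsd(8) support, one quick sanity check is to look for one of the jail-visible vfs.nfsd.* sysctls listed later in this article – for example:

```
host # sysctl vfs.nfsd.server_max_nfsvers
```

If the sysctl is missing then your branch is most likely too old – update to a recent 14-CURRENT or 13-STABLE.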

Today I am going to test how that new feature works and I will show you how to configure it within a VNET Jail. If you would like to know how the VNET Jails work take a look at my recent FreeBSD Jails Containers article for the details.

The Table of Contents for this article is listed below.

  • Test Environment
  • FreeBSD Host Setup
  • FreeBSD VNET Jail Setup
  • FreeBSD Client Setup
  • Important Rick Notes
  • Summary

Test Environment

The host machine is a typical FreeBSD server installed on a physical or virtual machine. The exact ISO image I used was FreeBSD-13.2-STABLE-amd64-20230615-894492f5bf4e-255597-disc1.iso from 2023/06/15 so anything newer than that should also work. The nfsd system is a FreeBSD VNET Jail that will run on the host system. The client will be running FreeBSD 13.2-RELEASE and is where we will mount the NFSv4 share.

Below you will find the list of systems that we will use in this guide.

       IP  ROLE    VERSION
20.0.0.10  client  13.2-RELEASE @254617
20.0.0.20  host    13.2-STABLE  @255597
20.0.0.30  nfsd    13.2-STABLE  @255597

FreeBSD Host Setup

First we will install the host machine with a typical ZFS install – nothing special about that – just pick the Auto (ZFS) option from the bsdinstall(8) installer.

Below you will find the contents of the needed configuration files – such as the /etc/rc.conf file.

host # cat /etc/rc.conf
# NETWORK
  hostname="host"
  cloned_interfaces="bridge0"
  ifconfig_em0="up"
  ifconfig_bridge0="inet 20.0.0.20/24 up addm em0"
  defaultrouter="20.0.0.1"
  gateway_enable=YES

# DAEMONS
  sshd_enable=YES
  zfs_enable=YES

# JAILS
  jail_enable=YES
  jail_parallel_start=YES
  jail_list="nfsd"

We will keep our Jails under the /jail path. Let's create these datasets now.

host # zfs create -o mountpoint=/jail -p zroot/jail
host # zfs create                     -p zroot/jail/BASE
host # zfs create                     -p zroot/jail/nfsd

I also created the /jail/BASE and /jail/nfsd dirs. We will keep the *-base.txz files of various FreeBSD versions in the first one and /jail/nfsd will be used as the place for our nfsd(8) server VNET Jail.

host # find /jail -maxdepth 1
/jail
/jail/BASE
/jail/nfsd

As we already have the 13.2-STABLE ISO file we can just copy the needed base.txz file from there.

host # mdconfig -a -t vnode -f FreeBSD-13.2-STABLE-amd64-20230615-894492f5bf4e-255597-disc1.iso
md0

host # mkdir ISO

host # mount -t cd9660 /dev/md0 ISO

host # cp ISO/usr/freebsd-dist/base.txz /jail/BASE/13.2-STABLE-255597-base.txz

host # umount ISO

host # rm -r ISO

host # mdconfig -d -u /dev/md0

Let's not forget about DNS – we will use the /etc/hosts file for this purpose.

host # cat /etc/hosts
127.0.0.1       localhost localhost.my.domain
::1             localhost localhost.my.domain

20.0.0.10       client
20.0.0.20       host
20.0.0.30       nfsd

This is how the host network interfaces and routes look.

host # ifconfig em0; ifconfig bridge0; ifconfig epair30a;
em0: flags=8963<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=4810099<RXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,VLAN_HWFILTER,NOMAP>
        ether 08:00:27:09:cc:a8
        media: Ethernet autoselect (1000baseT <full-duplex>)
        status: active
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
bridge0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        ether 58:9c:fc:10:ff:b6
        inet 20.0.0.20/24 broadcast 20.0.0.255
        id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15
        maxage 20 holdcnt 6 proto rstp maxaddr 2000 timeout 1200
        root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0
        member: epair30a flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 4 priority 128 path cost 2000
        member: em0 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 1 priority 128 path cost 2000000
        groups: bridge
        nd6 options=9<PERFORMNUD,IFDISABLED>
epair30a: flags=8963<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
        description: jail:nfsd
        options=8<VLAN_MTU>
        ether 02:82:ef:85:a1:0a
        groups: epair
        media: Ethernet 10Gbase-T (10Gbase-T <full-duplex>)
        status: active
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>

host # route get 0
   route to: default
destination: default
       mask: default
    gateway: 20.0.0.1
        fib: 0
  interface: bridge0
      flags: <UP,GATEWAY,DONE,STATIC>
 recvpipe  sendpipe  ssthresh  rtt,msec    mtu        weight    expire
       0         0         0         0      1500         1         0

host # netstat -Win -f inet
Name      Mtu Network            Address              Ipkts Ierrs Idrop    Opkts Oerrs  Coll
lo0         - 127.0.0.0/8        127.0.0.1                0     -     -        0     -     -
bridge0     - 20.0.0.0/24        20.0.0.20              967     -     -      743     -     -

FreeBSD VNET Jail Setup

In this step we will prepare our VNET Jail.

host # tar -xf /jail/BASE/13.2-STABLE-255597-base.txz -C /jail/nfsd --unlink

Now the /etc/jail.conf.d/nfsd.conf config file for our Jail.

host # cat /etc/jail.conf.d/nfsd.conf
nfsd {
  # STARTUP/LOGGING
    exec.start = "/bin/sh /etc/rc";
    exec.stop  = "/bin/sh /etc/rc.shutdown";
    exec.consolelog = "/var/log/jail_console_${name}.log";

  # PERMISSIONS
    allow.raw_sockets;
    mount.devfs;
    exec.clean;

  # PATH/HOSTNAME
    path = "/jail/${name}";
    host.hostname = "${name}";

  # VNET/VIMAGE
    vnet;
    vnet.interface = "${if}b";

  # NFSD/VNET
    allow.nfsd;
    enforce_statfs = 1;

  # NETWORKS/INTERFACES
    $id = "30";
    $ip = "20.0.0.${id}/24";
    $gw = "20.0.0.1";
    $br = "bridge0";
    $if = "epair${id}";

  # ADD TO bridge0 INTERFACE
    exec.prestart += "ifconfig ${if} create up";
    exec.prestart += "ifconfig ${if}a up descr jail:${name}";
    exec.prestart += "ifconfig ${br} addm ${if}a up";
    exec.start    += "ifconfig ${if}b ${ip} up";
    exec.start    += "route add default ${gw}";
    exec.poststop += "ifconfig ${if}a destroy";
}

We can now start our nfsd VNET Jail.

host # service jail start nfsd

host # jls
   JID  IP Address      Hostname                      Path
     1                  nfsd                          /jail/nfsd

host # jexec nfsd

root@nfsd:/ # ifconfig | grep 'inet '
        inet 127.0.0.1/8
        inet 20.0.0.30/24 broadcast 20.0.0.255

Our VNET Jail seems to work properly. Let's check its network connectivity.

root@nfsd:/ # ifconfig epair30b
epair30b: flags=8863<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=8<VLAN_MTU>
        ether 02:54:54:dd:f0:0b
        inet 20.0.0.30/24 broadcast 20.0.0.255
        groups: epair
        media: Ethernet 10Gbase-T (10Gbase-T <full-duplex>)
        status: active
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>

root@nfsd:/ # route get 0
   route to: default
destination: default
       mask: default
    gateway: 20.0.0.1
        fib: 0
  interface: epair30b
      flags: <UP,GATEWAY,DONE,STATIC>
 recvpipe  sendpipe  ssthresh  rtt,msec    mtu        weight    expire
       0         0         0         0      1500         1         0

root@nfsd:/ # ping -c 3 20.1
PING 20.1 (20.0.0.1): 56 data bytes
64 bytes from 20.0.0.1: icmp_seq=0 ttl=255 time=0.480 ms
64 bytes from 20.0.0.1: icmp_seq=1 ttl=255 time=0.612 ms
64 bytes from 20.0.0.1: icmp_seq=2 ttl=255 time=0.357 ms

--- 20.1 ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.357/0.483/0.612/0.104 ms

root@nfsd:/ # nc -v -u 1.1.1.1 53
Connection to 1.1.1.1 53 port [udp/domain] succeeded!
^C

Works as it should.

To make the network setup complete we will now put the same contents into the /etc/hosts file as on the host system.

root@nfsd:/ # cat /etc/hosts
127.0.0.1       localhost localhost.my.domain
::1             localhost localhost.my.domain

20.0.0.10       client
20.0.0.20       host
20.0.0.30       nfsd

We need to add two lines to the /etc/sysctl.conf file.

root@nfsd:/ # cat /etc/sysctl.conf

# VNET/NFSD
vfs.nfs.enable_uidtostring=1
vfs.nfsd.enable_stringtouid=1

The main /etc/rc.conf file in the nfsd VNET Jail looks as follows.

root@nfsd:/# cat /etc/rc.conf
# DAEMONS
  sshd_enable=YES
  nfs_server_enable=YES
  nfsv4_server_only=YES
  nfs_server_flags="-t"

I enabled and forced only the NFSv4 version. That allows us to serve the NFSv4 share with only a single TCP port (2049) – which is very firewall friendly πŸ™‚
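For example – if you happen to run pf(4) somewhere between the clients and the Jail – a single rule similar to this sketch is enough (the em0 interface and the addresses here are just this guide's example values):

```
# pf.conf sketch - allow NFSv4 (single TCP port 2049) to the nfsd Jail
pass in on em0 proto tcp from 20.0.0.0/24 to 20.0.0.30 port 2049
```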

Now we will create our NFS share under the /share dir and start the nfsd(8) NFS server. We will also populate /etc/exports with the config to share that /share dir.

root@nfsd:/ # mkdir /share

root@nfsd:/ # cat /etc/exports
V4: /           -sec=sys               -network 20.0.0.0/24
/share          -sec=sys -maproot=root -network 20.0.0.0/24

We will now restart our FreeBSD VNET Jail to make these changes take effect.

host # service jail restart nfsd
Stopping jails: nfsd.
Starting jails: nfsd.

host # jls
   JID  IP Address      Hostname                      Path
     2                  nfsd                          /jail/nfsd

host # jexec nfsd

root@nfsd:/ # sockstat -l4
USER     COMMAND    PID   FD PROTO  LOCAL ADDRESS         FOREIGN ADDRESS
root     sendmail   66969 4  tcp4   127.0.0.1:25          *:*
root     sshd       66195 4  tcp4   *:22                  *:*
root     nfsd       61939 5  tcp4   *:2049                *:*

Our VNET Jail setup is complete. The nfsd(8) server listens at its default 2049 port. We will now move on to the client setup.

FreeBSD Client Setup

Our client system is a FreeBSD 13.2-RELEASE system – again – installed with the simple Auto (ZFS) option chosen in the FreeBSD installer.

We will start with the /etc/hosts file.

client # cat /etc/hosts
127.0.0.1       localhost localhost.my.domain
::1             localhost localhost.my.domain

20.0.0.10       client
20.0.0.20       host
20.0.0.30       nfsd

Next is the main FreeBSD /etc/rc.conf config file.

client # cat /etc/rc.conf
# NETWORK
  hostname="client"
  ifconfig_em0="inet 20.0.0.10/24"
  defaultrouter="20.0.0.1"
  sshd_enable=YES
  zfs_enable=YES
  nfs_client_enable=YES

The important entry for NFS here is nfs_client_enable=YES.

We may now try to mount our NFSv4 share from the nfsd VNET Jail on our client system.

client # service nfsclient start
NFS access cache time=60

client # ping -c 3 nfsd
PING nfsd (20.0.0.30): 56 data bytes
64 bytes from 20.0.0.30: icmp_seq=0 ttl=64 time=0.540 ms
64 bytes from 20.0.0.30: icmp_seq=1 ttl=64 time=0.465 ms
64 bytes from 20.0.0.30: icmp_seq=2 ttl=64 time=0.589 ms

--- nfsd ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.465/0.531/0.589/0.051 ms

client # nc -v nfsd 2049
Connection to nfsd 2049 port [tcp/nfsd] succeeded!
^C

client # mount -o nfsv4 nfsd:/share /mnt

client # mount -t nfs
nfsd:/share on /mnt (nfs, nfsv4acls)

client # cd /mnt

client # echo asd > asd

client # ls -l asd
-rw-r--r--  1 root  wheel  4 Jun 30 03:06 asd

client # rm asd

Voila! Works as advertised πŸ™‚

We can also make that NFS mount permanent – mounted automatically at boot – using the /etc/fstab file.

client # tail -3 /etc/fstab

# NFSD @ VNET nfsv4
nfsd:/share             /mnt                    nfs     rw,nfsv4,late    0 0

How it looks live.

network

Important Rick Notes

While the above setup works properly I would like to share some of Rick's notes from his HOWTO.

To make such a VNET Jail serve the NFS shares these requirements need to be met:

  • Jail must be on a separate filesystem.
  • Jail root directory must be a filesystem mount point.
  • Jail config needs allow.nfsd; option added.
  • Jail config needs enforce_statfs = 1; to export shares mounted below the root of the Jail.
  • Most of the vfs.nfsd.* settings must be done in the host /etc/sysctl.conf config.

Currently the only vfs.nfsd.* settings available within a VNET Jail are:

  • vfs.nfsd.server_min_nfsvers
  • vfs.nfsd.server_max_nfsvers
  • vfs.nfs.enable_uidtostring
  • vfs.nfsd.enable_stringtouid
  • vfs.nfsd.fha.enable
  • vfs.nfsd.fha.read
  • vfs.nfsd.fha.write
  • vfs.nfsd.fha.bin_shift
  • vfs.nfsd.fha.max_nfsds_per_fh
  • vfs.nfsd.fha.max_reqs_per_nfsd

Summary

As you can see Rick did a very solid job here – the NFS server works really well in the FreeBSD VNET Jails.

Let me know in the comments if I forgot about anything.

EOF

FreeBSD Jails Containers

FreeBSD networking and containers (Jails) stacks are very mature and provide lots of useful features … yet for some reason these features are not properly advertised by the FreeBSD Project … or not even documented at all. I remember when Solaris was still under Sun – before the 'fatal' 2008 Oracle acquisition – and one of the advertised Solaris features was its networking capabilities – along with virtual switches etc. – administrated with the ipadm(1M) and dladm(1M) commands. FreeBSD – while having technologies like Netgraph or the lightweight Jails containers – along with VNET Jails that have a virtual network stack fully independent of the host – almost does not advertise them at all. The VNET Jails – while being production ready and used by thousands of sysadmins for about a decade – are still not documented in the FreeBSD Handbook or FreeBSD FAQ at all … you will not be able to find a single VNET mention in the FreeBSD Handbook. Even the FreeBSD Man Pages like jail.conf(5) do not mention it – only jail(8) partially mentions the VNET feature.

There are however two FreeBSD Books dedicated to Jails … one is free – FreeBSD Jails Using VNETs (from 2020) and one is non-free – FreeBSD Mastery – Jails (from 2019).

The Table of Contents for this article is:

  • FreeBSD Host Setup
  • Classic Jails
  • VNET Jails
  • Thin Provisioning Jails
  • Single Process Jails
  • Removing Jails
  • Summary

This guide aims to bring VNET Jails networking a little closer and make it simpler. While one can use a Netgraph bridge for this purpose – we will use the simpler and more obvious classic network bridge supported by the if_bridge(4) driver on FreeBSD. I also encourage you to check the FreeBSD Handbook – Jails chapter.
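If you prefer to test such a bridge setup live first – before making it permanent in rc.conf(5) – the same thing can be done by hand with ifconfig(8). A sketch using the em0 interface and the addresses from this guide:

```
host # ifconfig bridge0 create
host # ifconfig bridge0 addm em0 up
host # ifconfig em0 up
host # ifconfig bridge0 inet 20.0.0.20/24
```

Keep in mind that moving the host IP address from em0 to bridge0 over a remote connection can cut you off – better do this from the console.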

jails

FreeBSD Host Setup

The first thing we will do is to prepare the host networking. This is the typical static network configuration on FreeBSD with single IPv4 IP address without any VLANs in the /etc/rc.conf file. We will also enable the Jails subsystem.

host # cat /etc/rc.conf
# NETWORK
  hostname="host"
  ifconfig_em0="inet 20.0.0.20/24"
  defaultrouter="20.0.0.1"

# JAILS
  jail_enable="YES"
  jail_parallel_start="YES"
  jail_list="classic vnet shadow"

With that setup – your classic Jails will be able to connect to the outside World.

It can be visualized more or less like that.

              +--------+    +-----------------+
              |GATEWAY |    |20.0.0.20    HOST|
(Internet)<==>|        |<==>|em0              |
              |20.0.0.1|    |                 |
              +--------+    +-----------------+

Below we will create a classic Jail for a start. Each such setup requires certain decisions to be made – one of them will be using /jail as our Jails root.

We will create one with ZFS datasets now.

host # zfs create -o mountpoint=/jail -p zroot/jail
host # zfs create                     -p zroot/jail/BASE
host # zfs create                     -p zroot/jail/classic

I have also created the /jail/BASE and /jail/classic dirs. The first one will be used as a placeholder for the *-base.txz files of various FreeBSD versions. The latter will be used for our first ‘classic’ FreeBSD Jail.

host # find /jail -maxdepth 1
/jail
/jail/BASE
/jail/classic

Classic Jails

I will be using FreeBSD 13.2-RELEASE for the host system so this is the Jail version I will use for the ‘classic’ Jail.

host # fetch -o /jail/BASE/13.2-RELEASE-base.txz http://ftp.freebsd.org/pub/FreeBSD/releases/amd64/13.2-RELEASE/base.txz

The FreeBSD host can run any other FreeBSD version in a Jail as long as it is not newer than the host system version. That means that while we use FreeBSD 13.2-RELEASE you are able to run and consolidate a farm of any older FreeBSD versions – including the legendary 4.11-RELEASE … or the most problematic 5.0-RELEASE that was the first one to introduce SMP with its M:N threading model.

We will now create the ‘classic’ Jail.

host # tar -xf /jail/BASE/13.2-RELEASE-base.txz -C /jail/classic --unlink

… and now its configuration. Usually the jail.conf(5) file is used for that … but as you grow more Jails it becomes less practical to scroll through this file to ‘find’ the Jail you want to modify. This is where the /etc/jail.conf.d dir comes in handy. You can place each Jail config in an /etc/jail.conf.d/JAILNAME.conf file. This is what we will use here – leaving the /etc/jail.conf file empty or non-existent.

This is the config we will use for the ‘classic’ Jail.

host # cat /etc/jail.conf.d/classic.conf
classic {
  # STARTUP/LOGGING
    exec.start = "/bin/sh /etc/rc";
    exec.stop  = "/bin/sh /etc/rc.shutdown";
    exec.consolelog = "/var/log/jail_console_${name}.log";

  # PERMISSIONS
    allow.raw_sockets;
    exec.clean;

  # PATH/HOSTNAME
    path = "/jail/${name}";
    host.hostname = "${name}";

  # NETWORK
    ip4.addr = 20.0.0.50;
    interface = em0;
}

Now we will start that Jail.

host # service jail start classic
Starting jails: classic.

host # jls
   JID  IP Address      Hostname                      Path
    31  20.0.0.50       classic                       /jail/classic

Our ‘classic’ Jail successfully started.

The 20.0.0.50 IP address was added to the host em0 interface as shown below.

host # ifconfig em0
em0: flags=8963<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=481009b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,VLAN_HWFILTER,NOMAP>
        ether 08:00:27:09:cc:a8
        inet 20.0.0.20/24 broadcast 20.0.0.255
        inet 20.0.0.50/32 broadcast 20.0.0.50
        media: Ethernet autoselect (1000baseT <full-duplex>)
        status: active
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>

The ‘classic’ Jails networking can be visualized like that.

              +--------+    +-----------------------------+
              |GATEWAY |    |20.0.0.20                HOST|
(Internet)<==>|        |<==>|em0          +-------------+ |
              |20.0.0.1|    |20.0.0.50<==>|____jail0____| |
              +--------+    |20.0.0.51<==>|____jail1____| |
                            |(.......)<==>|    (...)    | |
                            |             +-------------+ |
                            +-----------------------------+

We can also ping(8) the ‘classic’ Jail IP address from the host system.

host # ping -c 3 20.50
PING 20.50 (20.0.0.50): 56 data bytes
64 bytes from 20.0.0.50: icmp_seq=0 ttl=64 time=0.114 ms
64 bytes from 20.0.0.50: icmp_seq=1 ttl=64 time=0.049 ms
64 bytes from 20.0.0.50: icmp_seq=2 ttl=64 time=0.046 ms

--- 20.50 ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.046/0.069/0.114/0.031 ms

Let's log in to the ‘classic’ Jail.

host # jls
   JID  IP Address      Hostname                      Path
     1  20.0.0.50       classic                       /jail/classic

host # jexec classic

root@classic:/ # hostname
classic

Inside the ‘classic’ Jail we are able to ping(8) the host gateway.

root@classic:/ # ping -c 3 20.1
PING 20.1 (20.0.0.1): 56 data bytes
64 bytes from 20.0.0.1: icmp_seq=0 ttl=255 time=0.083 ms
64 bytes from 20.0.0.1: icmp_seq=1 ttl=255 time=0.314 ms
64 bytes from 20.0.0.1: icmp_seq=2 ttl=255 time=0.256 ms

--- 20.1 ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.083/0.218/0.314/0.098 ms

We can also reach popular public DNS servers in the outside world.

root@classic:/ # nc -v -u 1.1.1.1 53
Connection to 1.1.1.1 53 port [udp/domain] succeeded!
^C

Let's configure one of those DNS servers for our ‘classic’ Jail and test it.

root@classic:/ # echo nameserver 1.1.1.1 > /etc/resolv.conf

root@classic:/ # drill freebsd.org | grep '^[^;]'
freebsd.org.    240     IN      A       96.47.72.84

Seems to work properly. Let's enable the sshd(8) service on it.

root@classic:/ # service sshd enable
sshd enabled in /etc/rc.conf

root@classic:/ # service sshd start
Generating RSA host key.
3072 SHA256:mSVNDUSi14S+GiaFJgNHNLCqQi6ndFG9JaSyA/wev1k root@classic (RSA)
Generating ECDSA host key.
256 SHA256:Hij315+3C/IMVJ1RX+hNJynGtVSU7ALYN0AS9/lxpJY root@classic (ECDSA)
Generating ED25519 host key.
256 SHA256:qzQnJCHjhHB7jQzmimSLayBfOc3dkLzIVhmrL2r9qxM root@classic (ED25519)
Performing sanity check on sshd configuration.
Starting sshd.

root@classic:/ # sockstat -l4
USER     COMMAND    PID   FD PROTO  LOCAL ADDRESS         FOREIGN ADDRESS
root     sshd       15049 3  tcp4   20.0.0.50:22          *:*
root     sendmail   18689 3  tcp4   20.0.0.50:25          *:*
root     syslogd    84689 5  udp4   20.0.0.50:514         *:*

Seems to work properly. We will now try to log in to it from another host on the 20.0.0.0/24 network.

laptop % ssh 20.50
The authenticity of host '20.0.0.50 (20.0.0.50)' can't be established.
ED25519 key fingerprint is SHA256:qzQnJCHjhHB7jQzmimSLayBfOc3dkLzIVhmrL2r9qxM.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '20.0.0.50' (ED25519) to the list of known hosts.
(root@20.0.0.50) Password for root@classic:
(root@20.0.0.50) Password for root@classic:
(root@20.0.0.50) Password for root@classic:
root@20.0.0.50: Permission denied (publickey,keyboard-interactive).

Works. I was not able to log in because by default root logins are disabled.

VNET Jails

On the other hand the VNET Jails (with options VIMAGE in the kernel) are quite different beasts. While on the file/dir/config level they work the same – on the network level they are a lot different. They come with a separate network stack and do not add their IP to the host network interface.

Also – the current configuration of the host system (repeated below) would not work for VNET Jails network connectivity with the outside world.

host # cat /etc/rc.conf
# NETWORK
  hostname="host"
  ifconfig_em0="inet 20.0.0.20/24"
  defaultrouter="20.0.0.1"

To allow VNET Jails access to the world outside of the host system we need to use – for example – an if_bridge(4) interface – and move our host IP address there. Below is the non-working host network configuration – VNET Jails will not be able to access the outside world with it.

host # ifconfig
em0: flags=8963<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=481009b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,VLAN_HWFILTER,NOMAP>
        ether 08:00:27:09:cc:a8
        inet 20.0.0.20/24 broadcast 20.0.0.255
        inet 20.0.0.50/32 broadcast 20.0.0.50
        media: Ethernet autoselect (1000baseT <full-duplex>)
        status: active
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
        options=680003<RXCSUM,TXCSUM,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
        inet6 ::1 prefixlen 128
        inet6 fe80::1%lo0 prefixlen 64 scopeid 0x2
        inet 127.0.0.1/8
        groups: lo
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>


host # netstat -Win -f inet
Name      Mtu Network            Address              Ipkts Ierrs Idrop    Opkts Oerrs  Coll
em0         - 20.0.0.0/24        20.0.0.20           164406     -     -    91934     -     -
em0         - 20.0.0.50/32       20.0.0.50              207     -     -      168     -     -
lo0         - 127.0.0.0/8        127.0.0.1                6     -     -        6     -     -

host # route get 0
   route to: default
destination: default
       mask: default
    gateway: 20.0.0.1
        fib: 0
  interface: em0
      flags: <UP,GATEWAY,DONE,STATIC>
 recvpipe  sendpipe  ssthresh  rtt,msec    mtu        weight    expire
       0         0         0         0      1500         1         0

This is how the rc.conf(5) file needs to look now. This config will allow VNET Jails to leave the host system and access the outside world.

host # cat /etc/rc.conf

# NETWORK
  hostname="host"
  cloned_interfaces="bridge0"
  ifconfig_em0="up"
  ifconfig_bridge0="inet 20.0.0.20/24 up addm em0"
  defaultrouter="20.0.0.1"
  gateway_enable=YES

# JAILS
  jail_enable="YES"
  jail_parallel_start="YES"
  jail_list="classic"

… and this is how it looks in the ifconfig(8) and netstat(8) commands.

host # ifconfig
em0: flags=8963<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=481009b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,VLAN_HWFILTER,NOMAP>
        ether 08:00:27:09:cc:a8
        inet 20.0.0.50/32 broadcast 20.0.0.50
        media: Ethernet autoselect (1000baseT <full-duplex>)
        status: active
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
        options=680003<RXCSUM,TXCSUM,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
        inet6 ::1 prefixlen 128
        inet6 fe80::1%lo0 prefixlen 64 scopeid 0x2
        inet 127.0.0.1/8
        groups: lo
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
bridge0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        ether 58:9c:fc:10:ff:b6
        inet 20.0.0.20/24 broadcast 20.0.0.255
        id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15
        maxage 20 holdcnt 6 proto rstp maxaddr 2000 timeout 1200
        root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0
        member: em0 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 1 priority 128 path cost 20000
        groups: bridge
        nd6 options=9<PERFORMNUD,IFDISABLED>

host # netstat -Win -f inet
Name      Mtu Network            Address              Ipkts Ierrs Idrop    Opkts Oerrs  Coll
em0         - 20.0.0.50/32       20.0.0.50              207     -     -      168     -     -
lo0         - 127.0.0.0/8        127.0.0.1                6     -     -        6     -     -
bridge0     - 20.0.0.0/24        20.0.0.20           164576     -     -    92109     -     -

host # route get 0
   route to: default
destination: default
       mask: default
    gateway: 20.0.0.1
        fib: 0
  interface: bridge0
      flags: <UP,GATEWAY,DONE,STATIC>
 recvpipe  sendpipe  ssthresh  rtt,msec    mtu        weight    expire
       0         0         0         0      1500         1         0

The ‘classic’ Jail IP address is still bound to the em0 interface while the host IP address has been moved to the bridge0 bridge. If needed you can also move all the ‘classic’ Jails’ IP addresses to the bridge0 interface but it’s not mandatory.
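For completeness – this is roughly what that optional variant would look like in the Jail config. A sketch only, using the settings from this article – it is not required:

```shell
# /etc/jail.conf.d/classic.conf - hypothetical variant that binds
# the Jail alias to bridge0 instead of em0
#
#   ip4.addr  = 20.0.0.50;
#   interface = bridge0;
```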

It can be visualized like this. Keep in mind that the em0 and epair*a interfaces are members of the bridge0 interface.

                            +--------------------------------------------+
                            |                        +-------------+ HOST|
                            |       em0 20.0.0.50<==>|____jail0____|     |
                            |      /    20.0.0.51<==>|____jail1____|     |
              +--------+    |     /     (.......)<==>|    (...)    |     |
              |GATEWAY |    |20.0.0.20               +-------------+     |
(Internet)<==>|        |<==>|bridge0                                     |
              |20.0.0.1|    |  \ \ \           +-----------------------+ |
              +--------+    |   \ \ epairXa<==>|epairXb 20.0.0.60 vnet0| |
                            |    \ \           +-----------------------+ |
                            |     \ epairYa<==>|epairYb 20.0.0.61 vnet1| |
                            |      \           +-----------------------+ |
                            |       (....)a<==>|(....)b (.......) (...)| |
                            |                  +-----------------------+ |
                            +--------------------------------------------+

Let’s now create our first VNET Jail. The beginning is the same – we will extract the base.txz file from the 13.2-RELEASE system.

host # zfs create -p zroot/jail/vnet

host # tar -xf /jail/BASE/13.2-RELEASE-base.txz -C /jail/vnet --unlink

We will now also need the VNET Jail config.

host # cat /etc/jail.conf.d/vnet.conf
vnet {
  # STARTUP/LOGGING
    exec.start = "/bin/sh /etc/rc";
    exec.stop  = "/bin/sh /etc/rc.shutdown";
    exec.consolelog = "/var/log/jail_console_${name}.log";

  # PERMISSIONS
    allow.raw_sockets;
    exec.clean;

  # PATH/HOSTNAME
    path = "/jail/${name}";
    host.hostname = "${name}";

  # VNET/VIMAGE
    vnet;
    vnet.interface = "${if}b";

  # NETWORKS/INTERFACES
    $id = "60";
    $ip = "20.0.0.${id}/24";
    $gw = "20.0.0.1";
    $br = "bridge0";
    $if = "epair${id}";

  # ADD TO bridge0 INTERFACE
    exec.prestart += "ifconfig ${if} create up";
    exec.prestart += "ifconfig ${if}a up descr jail:${name}";
    exec.prestart += "ifconfig ${br} addm ${if}a up";
    exec.start    += "ifconfig ${if}b ${ip} up";
    exec.start    += "route add default ${gw}";
    exec.poststop += "ifconfig ${if}a destroy";
}
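The exec.prestart/exec.start/exec.poststop hooks above boil down to the following sequence of commands. A hand-run sketch with the ${id}=60 values from this config substituted – not something you need to type when using the config:

```shell
# host side - create the epair pair, attach its 'a' end to the bridge
ifconfig epair60 create up
ifconfig epair60a up descr jail:vnet
ifconfig bridge0 addm epair60a up

# inside the Jail - configure the 'b' end and the default route
jexec vnet ifconfig epair60b 20.0.0.60/24 up
jexec vnet route add default 20.0.0.1

# on Jail stop - destroy the 'a' end (the 'b' end goes away with it)
ifconfig epair60a destroy
```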

Let’s now start the VNET Jail.

host # service jail start vnet
Starting jails: vnet.

host # jls
   JID  IP Address      Hostname                      Path
     1  20.0.0.50       classic                       /jail/classic
     2                  vnet                          /jail/vnet

One thing to notice here is that the jls(8) tool does not show the VNET Jails’ IP addresses.

You can of course overcome that limitation with jexec(8) – but IMHO after 10+ years of VNET Jails being production ready it’s a PITA to say the least.

host # jexec vnet ifconfig | grep 'inet '
        inet 127.0.0.1/8
        inet 20.0.0.60/24 broadcast 20.0.0.255

Let’s now test our VNET Jail’s network connectivity to the outside world.

host # jexec vnet

root@vnet:/ # ping -c 3 20.1
PING 20.1 (20.0.0.1): 56 data bytes
64 bytes from 20.0.0.1: icmp_seq=0 ttl=255 time=0.853 ms
64 bytes from 20.0.0.1: icmp_seq=1 ttl=255 time=0.474 ms
64 bytes from 20.0.0.1: icmp_seq=2 ttl=255 time=1.355 ms

--- 20.1 ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.474/0.894/1.355/0.361 ms

root@vnet:/ # nc -v -u 1.1.1.1 53
Connection to 1.1.1.1 53 port [udp/domain] succeeded!
^C

Works as desired.

This is how the networking looks inside the VNET Jail.

root@vnet:/ # ifconfig
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
        options=680003<RXCSUM,TXCSUM,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
        inet6 ::1 prefixlen 128
        inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1
        inet 127.0.0.1/8
        groups: lo
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
epair60b: flags=8863<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=8<VLAN_MTU>
        ether 02:47:e8:2e:6b:0b
        inet 20.0.0.60/24 broadcast 20.0.0.255
        groups: epair
        media: Ethernet 10Gbase-T (10Gbase-T <full-duplex>)
        status: active
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>

root@vnet:/ # netstat -Win -f inet
Name       Mtu Network            Address              Ipkts Ierrs Idrop    Opkts Oerrs  Coll
lo0          - 127.0.0.0/8        127.0.0.1             2848     -     -     2848     -     -
epair60b     - 20.0.0.0/24        20.0.0.60                5     -     -        9     -     -

root@vnet:/ # route get 0
   route to: default
destination: default
       mask: default
    gateway: 20.0.0.1
        fib: 0
  interface: epair60b
      flags: <UP,GATEWAY,DONE,STATIC>
 recvpipe  sendpipe  ssthresh  rtt,msec    mtu        weight    expire
       0         0         0         0      1500         1         0

Same as with the ‘classic’ Jail – we will enable the sshd(8) service.

root@vnet:/ # service sshd enable
sshd enabled in /etc/rc.conf

root@vnet:/ # service sshd start
Generating RSA host key.
3072 SHA256:n9sGBV1bmz3bT4+qKuFE5fZHjnBcYlOMxXCq98z7/r0 root@vnet (RSA)
Generating ECDSA host key.
256 SHA256:A+gDtzkkrhNnGRGR4Yf27cqME8/NZk5NCHrxwyEO9oM root@vnet (ECDSA)
Generating ED25519 host key.
256 SHA256:aojc9Kbyd32HkllG9+noKL8GvKMjObuLrUNiq24+OFk root@vnet (ED25519)
Performing sanity check on sshd configuration.
Starting sshd.

root@vnet:/ # sockstat -l4
USER     COMMAND    PID   FD PROTO  LOCAL ADDRESS         FOREIGN ADDRESS
root     sshd       37145 4  tcp4   *:22                  *:*
root     sendmail   99914 4  tcp4   127.0.0.1:25          *:*
root     syslogd    71123 6  udp4   *:514                 *:*

We will try to connect to it from a system other than the host.

laptop % ssh 20.60
The authenticity of host '20.0.0.60 (20.0.0.60)' can't be established.
ED25519 key fingerprint is SHA256:aojc9Kbyd32HkllG9+noKL8GvKMjObuLrUNiq24+OFk.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '20.0.0.60' (ED25519) to the list of known hosts.
(root@20.0.0.60) Password for root@vnet:
(root@20.0.0.60) Password for root@vnet:
(root@20.0.0.60) Password for root@vnet:
root@20.0.0.60: Permission denied (publickey,keyboard-interactive).

Same as with the ‘classic’ Jail – we will not log in as the root user as it’s disabled by default – but that is not the goal of this exercise.

Thin Provisioning Jails

While this will not be possible on the UFS filesystem – ZFS allows it without any hassle.

We will now create a template for the ‘classic’ Jails on ZFS.

host # zfs snapshot zroot/jail/classic@template

The only things we did with the ‘classic’ Jail were to set up the DNS server in the /etc/resolv.conf file and to enable the sshd(8) service. We generally want that from any of our future Jails – so it’s a good candidate for a template Jail.

As we created the classic@template ZFS snapshot – we can now create an unlimited number of thin provisioned ‘classic’ Jails from it. Just make sure to create the needed /etc/jail.conf.d Jail config files.
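A sketch of how such cloning could be scripted for several Jails at once – the jail names and the IP range below are hypothetical, following the classic.conf shown earlier:

```shell
# clone the template and stamp out a matching config per Jail
for n in 51 52 53; do
  zfs clone zroot/jail/classic@template "zroot/jail/jail${n}"
  cp /etc/jail.conf.d/classic.conf "/etc/jail.conf.d/jail${n}.conf"
  sed -i '' -e "s|classic|jail${n}|g" \
            -e "s|20.0.0.50|20.0.0.${n}|g" "/etc/jail.conf.d/jail${n}.conf"
done
```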

Let’s create a new thin provisioned Jail from it now.

host # zfs list -t snapshot
NAME                          USED  AVAIL     REFER  MOUNTPOINT
zroot/jail/classic@template   224K      -      503M  -

host # zfs clone zroot/jail/classic@template zroot/jail/shadow

host # zfs list -t all
NAME                          USED  AVAIL     REFER  MOUNTPOINT
zroot                        3.31G  15.6G       96K  none
zroot/ROOT                    700M  15.6G       96K  none
zroot/ROOT/default            699M  15.6G      699M  /
zroot/jail                   1.86G  15.6G      194M  /jail
zroot/jail/BASE               191M  15.6G      191M  /jail/BASE
zroot/jail/classic            503M  15.6G      503M  /jail/classic
zroot/jail/classic@template   224K      -      503M  -
zroot/jail/shadow               8K  15.6G      503M  /jail/shadow
zroot/jail/vnet               503M  15.6G      503M  /jail/vnet
zroot/tmp                      96K  15.6G       96K  /tmp
zroot/usr                     780M  15.6G       96K  /usr
zroot/usr/home                 96K  15.6G       96K  /usr/home
zroot/usr/ports                96K  15.6G       96K  /usr/ports
zroot/usr/src                 780M  15.6G      780M  /usr/src
zroot/var                     684K  15.6G       96K  /var
zroot/var/audit                96K  15.6G       96K  /var/audit
zroot/var/crash                96K  15.6G       96K  /var/crash
zroot/var/log                 204K  15.6G      204K  /var/log
zroot/var/mail                 96K  15.6G       96K  /var/mail
zroot/var/tmp                  96K  15.6G       96K  /var/tmp

host # zfs list -r -o space zroot/jail
NAME                AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
zroot/jail          15.6G  1.88G        0B    212M             0B      1.67G
zroot/jail/BASE     15.6G   191M        0B    191M             0B         0B
zroot/jail/classic  15.6G   503M      260K    503M             0B         0B
zroot/jail/nfsd     15.6G   513M        0B    513M             0B         0B
zroot/jail/shadow   15.6G   392K        0B    392K             0B         0B
zroot/jail/vnet     15.6G   505M        0B    505M             0B         0B

host # cp /etc/jail.conf.d/classic.conf /etc/jail.conf.d/shadow.conf

host # sed -i '' -e 's|classic|shadow|g' -e 's|20.0.0.50|20.0.0.51|g' /etc/jail.conf.d/shadow.conf

host # cat /etc/jail.conf.d/shadow.conf
shadow {
  # STARTUP/LOGGING
    exec.start = "/bin/sh /etc/rc";
    exec.stop  = "/bin/sh /etc/rc.shutdown";
    exec.consolelog = "/var/log/jail_console_${name}.log";

  # PERMISSIONS
    allow.raw_sockets;
    exec.clean;

  # PATH/HOSTNAME
    path = "/jail/${name}";
    host.hostname = "${name}";

  # NETWORK
    ip4.addr = 20.0.0.51;
    interface = em0;
}

host # service jail start shadow
Starting jails: shadow.

host # jls
   JID  IP Address      Hostname                      Path
     1  20.0.0.50       classic                       /jail/classic
     2                  vnet                          /jail/vnet
     3  20.0.0.51       shadow                        /jail/shadow

host # netstat -Win -f inet
Name      Mtu Network            Address              Ipkts Ierrs Idrop    Opkts Oerrs  Coll
em0         - 20.0.0.50/32       20.0.0.50             2854     -     -     2746     -     -
em0         - 20.0.0.51/32       20.0.0.51                3     -     -        0     -     -
lo0         - 127.0.0.0/8        127.0.0.1                6     -     -        6     -     -
bridge0     - 20.0.0.0/24        20.0.0.20           195544     -     -   150555     -     -

While the ‘classic’ Jail uses about 500 MB of space the thin provisioned shadow Jail uses less than 1 MB of space.

We can now test its sshd(8) daemon connection.

laptop % ssh 20.51
The authenticity of host '20.0.0.51 (20.0.0.51)' can't be established.
ED25519 key fingerprint is SHA256:qzQnJCHjhHB7jQzmimSLayBfOc3dkLzIVhmrL2r9qxM.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '20.0.0.51' (ED25519) to the list of known hosts.
(root@20.0.0.51) Password for root@shadow:
(root@20.0.0.51) Password for root@shadow:
(root@20.0.0.51) Password for root@shadow:
root@20.0.0.51: Permission denied (publickey,keyboard-interactive).

Yep. Works as it should.

You can create a template for a VNET Jail in the same manner.
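For example – a sketch using the same ZFS layout as above, with a hypothetical vnet2 name:

```shell
# snapshot the vnet Jail and clone a thin provisioned copy from it
zfs snapshot zroot/jail/vnet@template
zfs clone zroot/jail/vnet@template zroot/jail/vnet2
```

Remember to also copy and adjust the Jail config – at least the $id variable so each clone gets its own epair interface and IP address.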

Single Process Jails

Docker is mostly known for its single process execution model. You choose a process/daemon that you want to run isolated – copy it into the Docker container along with its dependencies – and start it.

Exactly the same can be achieved with FreeBSD Jails containers.

For the purposes of this guide we will execute the /bin/sh POSIX shell interpreter in a separate FreeBSD Jail. To avoid copying additional dependency files we will use the statically compiled version from the well known FreeBSD Rescue System.

host # mkdir -p /jail/shell/dev

host # cp /rescue/sh /rescue/hostname /jail/shell/

host # jail -n shell \
            -c path=/jail/shell \
               mount.devfs \
               host.hostname=shell \
               ip4.addr=20.0.0.111 \
               command=/sh

shell # /hostname
shell

shell # /sh
Cannot read termcap database;
using dumb terminal settings.

shell # for I in 1 2 3; do echo ${I}; done
1
2
3

shell # echo /*
/dev /hostname /sh

You can login to the host system from another SSH session to see that the shell Jail is running among our other FreeBSD Jails.

host # jls
   JID  IP Address      Hostname                      Path
     1  20.0.0.50       classic                       /jail/classic
     2                  vnet                          /jail/vnet
     3  20.0.0.51       shadow                        /jail/shadow
     4  20.0.0.111      shell                         /jail/shell

Removing Jails

For that purpose we will remove the ‘classic’ Jail.

We will first stop our Jail.

host # service jail stop classic
host # rm -rf /jail/classic
rm: /jail/classic/var/empty: Operation not permitted
rm: /jail/classic/var: Directory not empty
rm: /jail/classic/libexec/ld-elf32.so.1: Operation not permitted
rm: /jail/classic/libexec/ld-elf.so.1: Operation not permitted
rm: /jail/classic/libexec: Directory not empty
rm: /jail/classic/lib/libthr.so.3: Operation not permitted
rm: /jail/classic/lib/libcrypt.so.5: Operation not permitted
rm: /jail/classic/lib/libc.so.7: Operation not permitted
rm: /jail/classic/lib: Directory not empty
(...)

Wait … you are root and you cannot delete files? 🙂 This is just another FreeBSD feature – File Flags. They can also be used with the FreeBSD Secure Levels feature. By default some files and directories are marked with some of these flags. To be able to remove such files you first need to clear these flags. We can do that with the chflags(1) command.
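You can first check which flags are set – for example, on one of the files from the rm(1) errors above:

```shell
# the 5th column of 'ls -lo' output shows the file flags;
# 'schg' means system immutable
ls -lo /jail/classic/libexec/ld-elf.so.1

# clear the flag on a single file instead of the whole tree
chflags noschg /jail/classic/libexec/ld-elf.so.1
```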

host # chflags -R 0 /jail/classic
host # rm -rf /jail/classic
host # echo ${?}
0

Now as the File Flags have been removed we can delete all files and dirs with the usual rm(1) command. As our ‘classic’ Jail files are gone we should also delete the Jail config file /etc/jail.conf.d/classic.conf.

host # rm -f /etc/jail.conf.d/classic.conf

Summary

I am not sure what else I should add here … but I am sure that you will let me know in the comments section 🙂

Regards.

EOF

Realtek RTL8188CUS – USB 802.11n WiFi Review

When using FreeBSD on a new laptop you sometimes find out that the WiFi chip that it came with is not supported … or not yet supported in the RELEASE version while support exists only in the CURRENT development version that you do not want to use.

This is where the Realtek RTL8188CUS chip comes in handy.

realtek

It’s used in many appliances and products but we are interested in its really small USB WiFi version.

The Realtek company even got the Taiwan Green Classics Award 2011 for their 802.11b/g/n 2.4GHz 1T1R WLAN Single Chip Controller (RTL8188CE/RTL8188CUS) in 2011 when it was introduced.

chip

chip-look

It’s not very powerful as it comes with a 1×1 antenna setup and 802.11n support – yes, only a single antenna – 150 Mbps at most.

It’s also very small and almost does not stick out of the laptop.

chip-space

When connected it also gives off a subtle little dim light.

chip-light

FreeBSD

I will now show you how it works on FreeBSD. This is for the 12.2-RELEASE version but it worked the same for 11.1-RELEASE 3 years ago.

My ThinkPad W520 laptop already has an Intel 6300 WiFi card with 3×3 antennas and the 802.11n standard, supported by the iwn(4) driver.

# sysctl net.wlan.devices
net.wlan.devices: iwn0

We will now attach the Realtek RTL8188CUS chip and check what shows up in the dmesg(8) output.

# dmesg
(...)
ugen2.3:  at usbus2
rtwn0 on uhub4
rtwn0:  on usbus2
rtwn0: MAC/BB RTL8188CUS, RF 6052 1T1R

… and some more information from usbconfig(8) command.

# usbconfig
(...)
ugen2.3:  at usbus2, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=ON (500mA)

# usbconfig -d 2.3 show_ifdrv
ugen2.3:  at usbus2, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=ON (500mA)
ugen2.3.0: rtwn0: 

It’s now listed as rtwn0 as it’s supported by the rtwn(4) driver on FreeBSD.

# sysctl net.wlan.devices
net.wlan.devices: rtwn0 iwn0

Let’s connect to some wireless network with this Realtek chip. I will create the wlan1 device as wlan0 is already taken by the other Intel 6300 card.

# ifconfig wlan1 create wlandev rtwn0

# ifconfig wlan1
wlan1: flags=8802<broadcast,simplex,multicast> metric 0 mtu 1500
        ether 00:1d:43:21:2d:1c
        groups: wlan
        ssid "" channel 1 (2412 MHz 11b)
        regdomain FCC country US authmode OPEN privacy OFF txpower 30 bmiss 7
        scanvalid 60 wme bintval 0
        parent interface: rtwn0
        media: IEEE 802.11 Wireless Ethernet autoselect (autoselect)
        status: no carrier
        nd6 options=21<performnud,auto_linklocal>

# wpa_passphrase WIFINETWORK PASSWORD >> /etc/wpa_supplicant.conf

# wpa_supplicant -i wlan1 -c /etc/wpa_supplicant.conf
Successfully initialized wpa_supplicant
wlan1: Trying to associate with d8:07:b6:b8:f4:81 (SSID='wireless' freq=2442 MHz)
wlan1: Associated with d8:07:b6:b8:f4:81
wlan1: WPA: Key negotiation completed with d8:07:b6:b8:f4:81 [PTK=CCMP GTK=CCMP]
wlan1: CTRL-EVENT-CONNECTED - Connection to d8:07:b6:b8:f4:81 completed [id=40 id_str=]
^Z // HIT THE [CTRL]+[Z] KEYS HERE
zsh: suspended  wpa_supplicant -i wlan1 -c /etc/wpa_supplicant.conf

# bg
[1]  + continued  wpa_supplicant -i wlan1 -c /etc/wpa_supplicant.conf

#

We should now have network LAYER 2 connected – wpa_supplicant(8) should be running in the background and the wlan1 interface should have the associated status.

# ps ax | grep wpa_supplicant
48693  4  S        0:00.43 wpa_supplicant -i wlan1 -c /etc/wpa_supplicant.conf
50687  4  S+       0:00.00 grep --color wpa_supplicant

# ifconfig wlan1
wlan1: flags=8843<up,broadcast,running,simplex,multicast> metric 0 mtu 1500
        ether 00:1d:43:21:2d:1c
        groups: wlan
        ssid wireless channel 7 (2442 MHz 11g ht/20) bssid d8:07:b6:b8:f4:81
        regdomain FCC country US authmode WPA2/802.11i privacy ON
        deftxkey UNDEF AES-CCM 2:128-bit txpower 30 bmiss 7 scanvalid 60
        protmode CTS ht20 ampdulimit 64k ampdudensity 4 shortgi -stbc -ldpc
        -uapsd wme roaming MANUAL
        parent interface: rtwn0
        media: IEEE 802.11 Wireless Ethernet MCS mode 11ng
        status: associated
        nd6 options=29<performnud,ifdisabled,auto_linklocal>

Let’s add LAYER 3 with an IP address using the dhclient(8) command.

# dhclient wlan1
DHCPDISCOVER on wlan1 to 255.255.255.255 port 67 interval 3
DHCPOFFER from 10.0.0.1
DHCPREQUEST on wlan1 to 255.255.255.255 port 67
DHCPACK from 10.0.0.1
bound to 10.0.0.9 -- renewal in 3600 seconds.

We just got the 10.0.0.9 IP address.

One last step with DNS and we will test the connection with ping(8) command.

# echo nameserver 1.1.1.1 > /etc/resolv.conf

# ping -c 3 freebsd.org
PING freebsd.org (96.47.72.84): 56 data bytes
64 bytes from 96.47.72.84: icmp_seq=0 ttl=50 time=119.870 ms
64 bytes from 96.47.72.84: icmp_seq=1 ttl=50 time=119.371 ms
64 bytes from 96.47.72.84: icmp_seq=2 ttl=50 time=119.128 ms

--- freebsd.org ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 119.128/119.456/119.870/0.309 ms

Works.

FreeBSD Benchmark

I next tested the performance of this simple single antenna Realtek chip using an NFS large file transfer in the thunar(1) file manager.

not-great-not-terrible

The results are not that bad but not great either.

The file copy from the LAN server attached directly to the WiFi router to my laptop ran at about 2.9 MB/s. I was 5 meters away from the router.

server  ==LAN==>  router  ==WiFi==>  laptop  @  2.9 MB/s

The file copy from the laptop over WiFi to the LAN server attached directly to the WiFi router ran at about 2.6 MB/s. Still about 5 meters away from the router.

laptop  ==WiFi==>  router  ==LAN==>  server  @  2.6 MB/s

That is 23.2 Mbps and 20.8 Mbps respectively. Really far from the theoretical single antenna 802.11n 150 Mbps transfer … it’s probably the fault of the FreeBSD wireless stack.
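The MB/s to Mbps arithmetic used here is simply a factor of 8 – a quick helper to reproduce these numbers:

```shell
# convert MB/s to Mbps (1 byte = 8 bits)
mbps() { awk -v v="${1}" 'BEGIN { printf "%.1f\n", v * 8 }'; }

mbps 2.9   # prints 23.2
mbps 2.6   # prints 20.8
```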

I would say that it’s sufficient for Internet browsing but using local LAN resources over NFS can be painful.

By comparison my Intel 6300 WiFi card does 5.5 MB/s on the laptop-to-router-to-server path and 10.5 MB/s on the server-to-router-to-laptop path. That is 44 Mbps and 84 Mbps respectively instead of the 450 Mbps theoretical maximum. Both the Intel 6300 and my router have 3×3 antennas.

Would love to see these numbers closer to 30 MB/s …

Raspberry Pi

One of the other benefits of the Realtek RTL8188CUS chip is that it works very well on small Raspberry Pi boxes. Personally I have tested it on the Raspberry Pi 2B and it worked like a charm.

rpi

Price

This chip is also great when it comes to price. Products based on this chip are available everywhere. They are on EBAY. They are on ALIEXPRESS. And it costs as low as $2.50 in many cases.

Sometimes the delivery costs more than the product itself 🙂

Enjoy.

UPDATE 1 – Middle Ages

Reddit user Yaazkal just reminded me that the rtwn(4) driver on FreeBSD still does not support the 802.11n protocol.

It’s still in the middle ages of 802.11g transfers.

FreeBSD Desktop – Part 12 – Configuration – Openbox

Time to cut the bullshit and actually make some real configuration. In today’s article of the FreeBSD Desktop series I will describe how to configure the Openbox window manager.

You may want to check other articles in the FreeBSD Desktop series on the FreeBSD Desktop – Global Page where you will find links to all episodes of the series along with a table of contents for each episode.

Features

Compared to earlier articles in the series it will be HUGE, sorry for that. I could cut it into smaller parts but that would require editing of the Openbox configuration, its shortcuts and menus over and over again, so for the sake of simplicity it’s better to put it all at once. As it is that big there will be mistakes, but I will fix them ASAP.

Here is the list of all features that will be available in this Openbox configuration.

  • Nice looking Openbox theme.
  • Openbox Menu (static) with nice looking icons.
  • Openbox Menu for FreeBSD top(1)/ps(1) commands and config files/logs.
  • Openbox Menu for FreeBSD default sound output.
  • Openbox Menu and shortcuts for FreeBSD sound volume increase/decrease.
  • Openbox Menu for FreeBSD CPU frequency scaling.
  • Openbox Menu for FreeBSD network management with network.sh script.
  • Openbox Menu for screenshots/wallpapers management.
  • Openbox Menu for Recent files.
  • Random wallpaper handling.
  • Random xterm(1) theme at every terminal start with lots of great themes.
  • Openbox shortcuts and script for Aero Snap like behavior.
  • Openbox Dmenu shortcuts and integration.
  • Openbox configured with nice fonts.
  • Openbox shortcuts for most important tasks.
  • Warning for low battery on laptop.
  • I probably forgot about a dozen other features – let me know in comments 🙂

Here is how the Openbox menus, window borders and window switching look.

openbox-alt-tab

openbox-menu

Here are all the files with needed configuration.

Doas

To make most scripts work your user (vuk in the series) needs to be in the wheel, operator and network groups and doas(1) (a sudo(8) equivalent) needs to be installed and configured in the following way.

# pkg install doas

# pw groupmod wheel    -m vuk
# pw groupmod operator -m vuk
# pw groupmod network  -m vuk

# cat /usr/local/etc/doas.conf
permit nopass :wheel as root

permit nopass :network as root cmd ifconfig
permit nopass :network as root cmd dhclient
permit nopass :network as root cmd umount
permit nopass :network as root cmd wpa_supplicant
permit nopass :network as root cmd ppp
permit nopass :network as root cmd killall args -9 dhclient
permit nopass :network as root cmd killall args -9 wpa_supplicant
permit nopass :network as root cmd killall args -9 ppp
permit nopass :network as root cmd cat args /etc/ppp/ppp.conf
permit nopass :network as root cmd /etc/rc.d/netif args onerestart
permit nopass :network as root cmd tee args /etc/resolv.conf
permit nopass :network as root cmd tee args -a /etc/resolv.conf
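With that config in place a user in the network group can run the whitelisted commands without a password – for example (hypothetical invocations, matching the rules above):

```shell
% doas ifconfig wlan0 up
% doas dhclient wlan0
% doas killall -9 dhclient
```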

Scripts

In this post I attach scripts I have made and used for about 13 years since I started to use FreeBSD on the desktop. Download them all in the scripts.tar.gz file and unpack them into the ~/scripts directory to make it look like this.

% find scripts | sort
scripts/__openbox_cpufreq.sh
scripts/__openbox_current_wallpaper.sh
scripts/__openbox_delete_wallpaper.sh
scripts/__openbox_dmenu.sh
scripts/__openbox_edit_screenshot.sh
scripts/__openbox_edit_wallpaper_gimp.sh
scripts/__openbox_freebsd_sound.sh
scripts/__openbox_lock_zzz.sh
scripts/__openbox_lock.sh
scripts/__openbox_recent.sh
scripts/__openbox_reload_wallpaper.sh
scripts/__openbox_restart_conky.sh
scripts/__openbox_restart_dzen2.sh
scripts/__openbox_restart_plank.sh
scripts/__openbox_restart_tint2.sh
scripts/__openbox_show_screenshot.sh
scripts/__openbox_stats_ps_KILLALL.sh
scripts/__openbox_stats_top_cpu_KILL.sh
scripts/__openbox_stats_top_cpu_RENICE.sh
scripts/__openbox_stats_top_mem_KILL.sh
scripts/__openbox_stats_top_mem_RENICE.sh
scripts/aero-snap.sh
scripts/fc-cache.sh
scripts/firefox-clean.sh
scripts/network.sh
scripts/random_wallpaper.sh
scripts/shot.sh
scripts/xterm.sh
scripts/desktop-kill-shit.sh
scripts/desktop-battery-warning.sh

Make sure they remain executable.

% chmod +x ~/scripts/*

To make them work properly add ~/scripts to the ${PATH} variable at the beginning of the ~/.xinitrc file.

# PATH TO SCRIPTS
  export PATH=${PATH}:~/scripts


All of my scripts have this ‘mysterious’ line at the end. It’s for statistics – to check which scripts are run and when (or if at all – to know which ones to delete).

echo '1' >> ~/scripts/stats/$( basename ${0} )

Thus you need to create the ‘stats’ directory.

% mkdir -p ~/scripts/stats
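As a self-contained illustration of how these counters accumulate – the same mechanism, but using a throwaway directory instead of ~/scripts/stats:

```shell
# every script run appends one '1' line - wc -l then gives the run count
STATS="$(mktemp -d)"
echo '1' >> "${STATS}/xterm.sh"
echo '1' >> "${STATS}/xterm.sh"
echo '1' >> "${STATS}/xterm.sh"
wc -l < "${STATS}/xterm.sh"   # 3 runs recorded
```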

I have implemented that about two months ago and here are the results.

% wc -l ~/scripts/stats/* | sort -n
       1 /home/vermaden/scripts/stats/__openbox_show_screenshot.sh
       2 /home/vermaden/scripts/stats/__openbox_cpufreq.sh
       2 /home/vermaden/scripts/stats/__openbox_current_wallpaper.sh
       2 /home/vermaden/scripts/stats/__openbox_fullscreen.sh
       4 /home/vermaden/scripts/stats/__openbox_restart_dzen2.sh
       4 /home/vermaden/scripts/stats/dzen2-fifo.sh
       5 /home/vermaden/scripts/stats/__openbox_dmenu.sh
       5 /home/vermaden/scripts/stats/__openbox_restart_conky.sh
       5 /home/vermaden/scripts/stats/__openbox_restart_tint2.sh
       6 /home/vermaden/scripts/stats/__openbox_delete_wallpaper.sh
       7 /home/vermaden/scripts/stats/__openbox_freebsd_sound.sh
       8 /home/vermaden/scripts/stats/aero-snap.sh
      12 /home/vermaden/scripts/stats/__openbox_edit_screenshot.sh
      16 /home/vermaden/scripts/stats/__openbox_lock_zzz.sh
      16 /home/vermaden/scripts/stats/__openbox_lock.sh
      22 /home/vermaden/scripts/stats/shot.sh
      24 /home/vermaden/scripts/stats/network.sh
     214 /home/vermaden/scripts/stats/xterm.sh
     960 /home/vermaden/scripts/stats/random_wallpaper.sh
    2767 /home/vermaden/scripts/stats/desktop-battery-warning.sh
   13834 /home/vermaden/scripts/stats/desktop-kill-shit.sh
   17916 total

Of course I limited the output only to scripts that are available in this article, but be patient, more to come later 🙂

Dependencies

To make these scripts work and generally to make all this setup work we will need these dependencies.

  • arandr
  • qt5ct
  • qtconfig-qt4
  • sakura
  • leafpad
  • geany
  • caja
  • thunar
  • libreoffice
  • galculator
  • pidgin
  • firefox
  • chrome
  • deadbeef
  • transmission-gtk
  • gnumeric
  • abiword
  • audacity
  • filezilla
  • midori
  • gimp
  • lupe
  • xvidcap
  • zenity
  • xterm
  • xrdb
  • scrot
  • feh
  • wmctrl
  • xdotool
  • viewnior
  • tint2
  • plank
  • dzen2
  • conky
  • mate-screensaver
  • xlockmore
  • gimp
  • dmenu
  • powerdxx
  • htop
  • galculator

To install them all with pkg(8) just type this line below.

# pkg install \
    geany caja thunar libreoffice galculator pidgin firefox chrome midori \
    abiword deadbeef transmission-gtk gnumeric  audacity filezilla zenity \
    gimp lupe recorder xvidcap  xterm xrdb scrot feh wmctrl xdotool tint2 \
    viewnior plank dzen2 conky mate-screensaver xlockmore powerdxx arandr \
    qt5ct gfontview galculator qtconfig qtconfig-qt4 sakura leafpad dmenu \
    htop 
   

I also assume that wallpapers will be kept under ~/gfx/wallpapers dir and screenshots made under ~/gfx/screenshots directory, so lets create them now.

% mkdir -p ~/gfx/wallpapers
% mkdir -p ~/gfx/screenshots

Crontab

Some of these scripts need to be put into crontab(1) to work, here are their entries.

% crontab -l
# DESKTOP
  *     *     * * * ~/scripts/desktop-kill-shit.sh                                       1> /dev/null 2> /dev/null
  */5   *     * * * ~/scripts/desktop-battery-warning.sh
  */20  *     * * * ~/scripts/random_wallpaper.sh ~/gfx/wallpapers                       1> /dev/null 2> /dev/null
  12,0  *     * * * /usr/bin/find ~/.cache -mtime +10 -delete                            1> /dev/null 2> /dev/null
  0     */3   * * * /usr/bin/find ~/.local/share/Trash/files -mtime +1 -delete  1> /dev/null 2> /dev/null

Fonts

I use the Ubuntu Mono font for the Openbox menus and the Fira Sans font for the Openbox window title bars, thus you will download them in the fonts.tar.gz file and extract them into the ~/.fonts directory – if it does not exist, create it.

% find .fonts
.fonts/fira-sans-bold-italic.otf
.fonts/fira-sans-bold.otf
.fonts/fira-sans-italic.otf
.fonts/fira-sans-regular.otf
.fonts/ubuntu-mono-bold-italic.ttf
.fonts/ubuntu-mono-bold.ttf
.fonts/ubuntu-mono-italic.ttf
.fonts/ubuntu-mono-regular.ttf

You can verify that Openbox will ‘see’ them using the fc-match(1) command like below.

% fc-match 'Fira Sans'
fira-sans-regular.otf: "Fira Sans" "Regular"

% fc-match 'Ubuntu Mono'
ubuntu-mono-regular.ttf: "Ubuntu Mono" "Regular"

Openbox

Openbox consists mostly of two files.

  • ~/.config/openbox/menu.xml
  • ~/.config/openbox/rc.xml

There are also these two, but it’s pointless to use them as we set our environment and start our apps/daemons in the ~/.xinitrc file (with the ~/.xsession symlink to it), but anyway.

  • ~/.config/openbox/autostart
  • ~/.config/openbox/environment

The icons for the Openbox menu are kept under ~/.config/openbox/icons directory.

Download whole Openbox configuration in the openbox.tar.gz file and unpack it into the ~/.config/openbox to make it look like that.

% find .config/openbox -maxdepth 1
.config/openbox
.config/openbox/rc.xml
.config/openbox/menu.xml
.config/openbox/icons
.config/openbox/environment
.config/openbox/autostart

Openbox Theme

The theme we will use at start is the Openbox Flat theme made by myself. I do not remember if I put it online on the https://www.box-look.org/ site but that does not matter. Grab it in the openbox-flat-theme.tar.gz file and unpack it into the ~/.themes directory, create it if it does not exist.

% find .themes/openbox_flat
.themes/openbox_flat
.themes/openbox_flat/openbox-3
.themes/openbox_flat/openbox-3/iconify.xbm
.themes/openbox_flat/openbox-3/XPM
.themes/openbox_flat/openbox-3/XPM/over.xpm
.themes/openbox_flat/openbox-3/XPM/close.xpm
.themes/openbox_flat/openbox-3/XPM/max.xpm
.themes/openbox_flat/openbox-3/XPM/stick.0.xpm
.themes/openbox_flat/openbox-3/XPM/min.xpm
.themes/openbox_flat/openbox-3/XPM/shade.xpm
.themes/openbox_flat/openbox-3/XPM/stick.1.xpm
.themes/openbox_flat/openbox-3/max.xbm
.themes/openbox_flat/openbox-3/close.xbm
.themes/openbox_flat/openbox-3/bullet.xbm
.themes/openbox_flat/openbox-3/shade.xbm
.themes/openbox_flat/openbox-3/themerc
.themes/openbox_flat/openbox-3/desk.xbm
.themes/openbox_flat/openbox-3/desk_toggled.xbm

Openbox FreeBSD Submenus

The ‘system’ Openbox submenu is for FreeBSD top(1)/ps(1) commands and config files/logs.

openbox-system.jpg

The ‘sound’ Openbox submenu is for FreeBSD default sound output selection.

openbox-sound.jpg

The ‘recent’ Openbox submenu is for Recent files.

openbox-recent.jpg

Check ‘screenshot:’ and ‘wallpaper:’ in the ‘x11’ Openbox submenu for screenshots/wallpapers management.

Check ‘cpu:’ in the ‘utilities’ Openbox submenu for FreeBSD CPU frequency scaling.

Check ‘NETWORK:’ in the ‘daemons’ Openbox submenu for FreeBSD network management with the network.sh script.

Shortcuts

Let's start with the most basic ones. [SUPER] is the so-called Windows key.

Shortcuts – Virtual Desktops

  • [ALT] + [F1] – switch to 1st virtual desktop.
  • [ALT] + [F2] – switch to 2nd virtual desktop.
  • [ALT] + [F3] – switch to 3rd virtual desktop.
  • [ALT] + [F4] – switch to 4th virtual desktop.
  • [SHIFT] + [ALT] + [F1] – move current window to 1st virtual desktop.
  • [SHIFT] + [ALT] + [F2] – move current window to 2nd virtual desktop.
  • [SHIFT] + [ALT] + [F3] – move current window to 3rd virtual desktop.
  • [SHIFT] + [ALT] + [F4] – move current window to 4th virtual desktop.

Shortcuts – Menus

  • [SUPER] + [SPACE] – show Openbox root menu.
  • [SUPER] + [ALT] + [SPACE] – show Openbox window list menu.
  • [ALT] + [SPACE] – show current window options menu (client menu).

Shortcuts – Window Management

  • [ALT] + [TAB] – cycle windows focus forward.
  • [SHIFT] + [ALT] + [TAB] – cycle windows focus backward.
  • [CTRL] + [ALT] + [Q] – close current window.
  • [CTRL] + [ALT] + [F] – put current window into fullscreen.
  • [ALT] + [Up] – shade current window.
  • [ALT] + [Down] – minimize current window.
  • [ALT] + [ESC] – send current window below all other windows.

Shortcuts – Advanced Aero Snap

  • [SUPER] + [Up] – move window to half of the screen from top.
  • [SUPER] + [Down] – move window to half of the screen from bottom.
  • [SUPER] + [Left] – move window to half of the screen from left.
  • [SUPER] + [Right] – move window to half of the screen from right.
  • [SUPER] + [CTRL] + [Up] – move window to top-left part of the screen.
  • [SUPER] + [CTRL] + [Down] – move window to bottom-left part of the screen.
  • [SUPER] + [ALT] + [Up] – move window to top-right part of the screen.
  • [SUPER] + [ALT] + [Down] – move window to bottom-right part of the screen.
  • [SUPER] + [ESC] – move window to center – but without fullscreen.

Shortcuts – Mouse

  • [Scroll Up] on Desktop – previous virtual desktop.
  • [Scroll Down] on Desktop – next virtual desktop.
  • [Scroll Up] on (unshaded) Window Titlebar – shade current window.
  • [Scroll Up] on (shaded) Window Titlebar – unshade current window.
  • [Middle Click] on Window Titlebar – send window to background.
  • [Right Click] on Window Titlebar – show window options menu (client menu).
  • [Left Click] on Window Titlebar Icon – show window options menu (client menu).
  • [Middle Click] on Window Titlebar Icon – close window.

Shortcuts – Various

  • [CTRL] + [SHIFT] + [ESC] – launch xterm(1) with htop(1) started with doas(1) for root privileges.
  • [SUPER] + [E] – start the Caja primary file manager (Explorer-like shortcut).
  • [SUPER] + [E] – start the Thunar secondary file manager.
  • [SUPER] + [D] – show desktop – minimize all windows.
  • [SUPER] + [R] – launch dmenu(1) starter.
  • [SUPER] + [L] – lock the screen.
  • [ALT] + [SHIFT] + [SUPER] + [L] – lock the screen and go to sleep.
  • [CTRL] + [PrintScreen] – make screenshot of the whole screen.
  • [SHIFT] + [CTRL] + [PrintScreen] – make screenshot of current window (click without moving the mouse) or selection (select part of the screen).

Shortcuts – Volume

These two work from the keyboard.

  • [SUPER] + [ALT] + [PageUp] – increase volume.
  • [SUPER] + [ALT] + [PageDown] – decrease volume.

These below work with the mouse.

For those who do not have a mouse with buttons on the wheel, like the Lenovo ThinkPad Precision Wireless Mouse (0B47163) for example, use the [ALT] key with mouse scroll up/scroll down on the desktop to increase/decrease the volume.

If you do have such a mouse, then tilt the wheel left to decrease and right to increase the volume.

Random Wallpaper

The random wallpaper handling is done with the ~/scripts/random_wallpaper.sh script. Be sure to put some images into the ~/gfx/wallpapers directory to make it work and to configure crontab(1) properly as shown earlier.
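The core of such a script is just picking one random file and setting it as the root window image. A minimal sketch of the idea follows; the actual ~/scripts/random_wallpaper.sh may differ, and feh(1) is only an assumed example of a tool that can set the wallpaper:

```shell
#!/bin/sh
# minimal sketch of a random wallpaper script - the real
# ~/scripts/random_wallpaper.sh from the article may differ

# pick one random file from the given directory
random_file() {
  find "${1}" -type f 2> /dev/null | sort -R | head -1
}

WALL=$( random_file "${HOME}/gfx/wallpapers" )

if [ -n "${WALL}" ]
then
  echo "wallpaper: ${WALL}"
  # setting it would then be e.g. (commented out so the sketch
  # runs without an X11 session; feh(1) is an assumption):
  # feh --bg-fill "${WALL}"
fi
```

Called from crontab(1) every few minutes, this gives a rotating wallpaper for free.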

Random xterm(1) Theme

To have a random xterm(1) theme on every startup you need three things: the ~/.Xdefaults default config file used by xterm(1), the ~/scripts/xterm.sh script, and the ~/.config/Xdefaults directory with xterm(1) themes. I gathered these themes from all over the Internet; only the VERMADEN and VERMADEN-OLD themes are created by me.

I have expanded this topic a lot more in a dedicated article – FreeBSD Desktop – Part 25 – Configuration – Random Terminal Theme.

A little preview of some of the included xterm(1) themes.

openbox-xterm.jpg

To make the xterm(1) icon look better you will also need to download and extract the icons.tar.gz file, with the end result looking as follows.

% find .icons
.icons/vermaden/xterm.xpm

Download and extract the xterm.tar.gz file to make its contents look like this.

% find ~/.config/Xdefaults 
.config/Xdefaults
.config/Xdefaults/themes
.config/Xdefaults/themes/Xdefaults.theme.DARK.N0TCH2K
.config/Xdefaults/themes/Xdefaults.theme.DARK.MOLOKAI
.config/Xdefaults/themes/Xdefaults.theme.DARK.FRONTEND-DELIGHT
.config/Xdefaults/themes/Xdefaults.theme.DARK.GRUVBOX-DARK
.config/Xdefaults/themes/Xdefaults.theme.DARK.TWILIGHT
.config/Xdefaults/themes/Xdefaults.theme.DARK.MONOKAI-SODA
.config/Xdefaults/themes/Xdefaults.theme.DARK.IC-GREEN-PPL
.config/Xdefaults/themes/Xdefaults.theme.DARK.GRUVBOX-TILIX
.config/Xdefaults/themes/Xdefaults.theme.DARK.NEOPOLITAN
.config/Xdefaults/themes/Xdefaults.theme.DARK.LOVELACE
.config/Xdefaults/themes/Xdefaults.theme.DARK.ARTHUR
.config/Xdefaults/themes/Xdefaults.theme.DARK.VERMADEN
.config/Xdefaults/themes/Xdefaults.theme.DARK.3024NIGHT
.config/Xdefaults/themes/Xdefaults.theme.DARK.SOLARIZED
.config/Xdefaults/themes/Xdefaults.theme.DARK.NORD
.config/Xdefaults/themes/Xdefaults.theme.DARK.VERMADEN-OLD
.config/Xdefaults/themes/Xdefaults.theme.DARK.HIGHWAY
.config/Xdefaults/themes/Xdefaults.theme.DARK.HARPER
.config/Xdefaults/themes/Xdefaults.theme.DARK.FLATUI
.config/Xdefaults/themes/Xdefaults.theme.DARK.SPACEDUST
.config/Xdefaults/themes/Xdefaults.theme.DARK.EARTHSONG
.config/Xdefaults/themes/Xdefaults.theme.DARK.PALI
.config/Xdefaults/themes/Xdefaults.theme.DARK.ALIENBLOOD
.config/Xdefaults/themes/Xdefaults.theme.DARK.ELIC
.config/Xdefaults/themes/Xdefaults.theme.LIGHT.SOLARIZED-LIGHT
.config/Xdefaults/themes/Xdefaults.theme.DARK.ELEMENTARY
.config/Xdefaults/themes/Xdefaults.theme.DARK.ELEMENTAL
.config/Xdefaults/themes/Xdefaults.theme.DARK.FREYA
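The essence of the random theme trick is just selecting one of those Xdefaults.theme.* files at startup. A minimal sketch of the idea (the actual xterm.sh shipped in xterm.tar.gz may differ):

```shell
#!/bin/sh
# minimal sketch of the random xterm(1) theme idea - the real
# xterm.sh from xterm.tar.gz may differ

# pick one random Xdefaults.theme.* file from the given directory
random_theme() {
  find "${1}" -type f -name 'Xdefaults.theme.*' 2> /dev/null | sort -R | head -1
}

THEME=$( random_theme "${HOME}/.config/Xdefaults/themes" )

if [ -n "${THEME}" ]
then
  echo "theme: ${THEME}"
  # loading it and starting xterm(1) would then be (commented out
  # so the sketch runs without X11):
  # xrdb -merge "${THEME}"
  # xterm
fi
```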

That's a lot of information for one article. Feel free to ask me about anything related or about things that I might have forgotten to put here.

UPDATE 1 – network.sh Integration

In another article – FreeBSD Network Management with network.sh Script – available here – I described how to manage various network sources with the network.sh script.

Below is an example of integrating that network.sh script with the Openbox window manager.

network.sh.openbox.menu.jpg

… and here is the code used in the ~/.config/openbox/menu.xml file.

network.sh.openbox.menu.code

Of course you can integrate the network.sh script with almost anything – it's just a command 🙂

EOF

Distributed Object Storage with Minio on FreeBSD

Meet Minio.

minio-logo-arch-32

Free and open source distributed object storage server compatible with the Amazon S3 v2/v4 API. Offers data protection against hardware failures using erasure code and bitrot detection. Supports highly available distributed setups. Provides confidentiality, integrity and authenticity assurances for encrypted data with negligible performance overhead. Both server side and client side encryption are supported. Below is an image of an example Minio setup.

Web

Minio identifies itself as the ZFS of Cloud Object Storage. This guide will show you how to set up highly available distributed Minio storage on the FreeBSD operating system with ZFS as the backend for the Minio data. For convenience we will use FreeBSD Jails operating system level virtualization.

Setup

The setup assumes that you have 3 datacenters: two datacenters in which most of the data must reside, and a third datacenter that serves a ‘quorum/witness’ role. Distributed Minio supports up to 16 nodes/drives total, so we may juggle that number to balance data between the desired datacenters. As we have 16 drives to allocate across 3 sites we will use a 7 + 7 + 2 approach here. The datacenters where most of the data must reside each get a 7/16 share, while the ‘quorum/witness’ datacenter gets only 2/16. Thanks to Minio's built-in redundancy we may lose (turn off for example) any one of those machines and our object storage will still be available and ready to use for any purpose.
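A quick sanity check of the 7 + 7 + 2 drive budget, assuming Minio's default erasure coding of 8 data + 8 parity blocks for a 16-drive setup (which keeps reads working while at least 8 drives stay online):

```shell
#!/bin/sh
# sanity check of the 7 + 7 + 2 drive budget
TOTAL=$(( 7 + 7 + 2 ))
echo "total drives: ${TOTAL}"

# losing the biggest (7-drive) site still leaves enough drives,
# since 8 data + 8 parity blocks tolerate 8 missing drives
LEFT=$(( TOTAL - 7 ))
echo "after losing a 7-drive site: ${LEFT} drives online (8 needed)"
```

This is why no single site holds 8 or more of the 16 drives in this layout.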

Jails

First we will create 3 Jails for our proof of concept Minio setup; storage1 will have the ‘quorum/witness’ role while storage2 and storage3 will have the ‘data’ role. To distinguish between commands typed on the host system and in a storageX Jail I use two different prompts; this way it should be obvious what command to execute and where.

Command on the host system.

host # command

Command on the storageX Jail.

root@storageX:/ # command

First we will create the base Jails for our setup.

host # mkdir -p /jail/BASE /jail/storage1 /jail/storage2 /jail/storage3
host # cd /jail/BASE
host # fetch http://ftp.freebsd.org/pub/FreeBSD/releases/amd64/11.1-RELEASE/base.txz
host # for I in 1 2 3; do echo ${I}; tar --unlink -xpJf /jail/BASE/base.txz -C /jail/storage${I}; done
1
2
3
host #

We will now add the Jails configuration to the /etc/jail.conf file.

I have used my laptop as the Jail host. This is why the Jails are configured to use the wireless wlan0 interface and 192.168.43.10X addresses.

host # for I in 1 2 3
do
  cat >> /etc/jail.conf << __EOF
storage${I} {
  host.hostname = storage${I}.local;
  ip4.addr = 192.168.43.10${I};
  interface = wlan0;
  path = /jail/storage${I};
  exec.start = "/bin/sh /etc/rc";
  exec.stop = "/bin/sh /etc/rc.shutdown";
  exec.clean;
  mount.devfs;
  allow.raw_sockets;
}

__EOF
done
host #

Let's verify that the /etc/jail.conf file is configured as desired.

host # cat /etc/jail.conf
storage1 {
  host.hostname = storage1.local;
  ip4.addr = 192.168.43.101;
  interface = wlan0;
  path = /jail/storage1;
  exec.start = "/bin/sh /etc/rc";
  exec.stop = "/bin/sh /etc/rc.shutdown";
  exec.clean;
  mount.devfs;
  allow.raw_sockets;
}

storage2 {
  host.hostname = storage2.local;
  ip4.addr = 192.168.43.102;
  interface = wlan0;
  path = /jail/storage2;
  exec.start = "/bin/sh /etc/rc";
  exec.stop = "/bin/sh /etc/rc.shutdown";
  exec.clean;
  mount.devfs;
  allow.raw_sockets;
}

storage3 {
  host.hostname = storage3.local;
  ip4.addr = 192.168.43.103;
  interface = wlan0;
  path = /jail/storage3;
  exec.start = "/bin/sh /etc/rc";
  exec.stop = "/bin/sh /etc/rc.shutdown";
  exec.clean;
  mount.devfs;
  allow.raw_sockets;
}

host #

Now we will start our Jails.

host # for I in 1 2 3; do service jail onestart storage${I}; done
Starting jails: storage1.
Starting jails: storage2.
Starting jails: storage3.

Let's see how they work.

host # jls
   JID  IP Address      Hostname                      Path
     1  192.168.43.101  storage1.local                /jail/storage1
     2  192.168.43.102  storage2.local                /jail/storage2
     3  192.168.43.103  storage3.local                /jail/storage3

Now let's add a DNS server so they will have Internet connectivity.

host # for I in 1 2 3; do echo nameserver 1.1.1.1 > /jail/storage${I}/etc/resolv.conf; done

We can now install the Minio package.

host # for I in 1 2 3; do jexec storage${I} env ASSUME_ALWAYS_YES=yes pkg install -y minio; echo; done
Bootstrapping pkg from pkg+http://pkg.FreeBSD.org/FreeBSD:11:amd64/quarterly, please wait...
Verifying signature with trusted certificate pkg.freebsd.org.2013102301... done
[storage1.local] Installing pkg-1.10.5...
[storage1.local] Extracting pkg-1.10.5: 100%
Updating FreeBSD repository catalogue...
pkg: Repository FreeBSD load error: access repo file(/var/db/pkg/repo-FreeBSD.sqlite) failed: No such file or directory
[storage1.local] Fetching meta.txz: 100%    944 B   0.9kB/s    00:01    
[storage1.local] Fetching packagesite.txz: 100%    6 MiB 637.1kB/s    00:10    
Processing entries: 100%
FreeBSD repository update completed. 31143 packages processed.
All repositories are up to date.
Updating database digests format: 100%
The following 1 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
        minio: 2018.03.19.19.22.06

Number of packages to be installed: 1

The process will require 22 MiB more space.
6 MiB to be downloaded.
[storage1.local] [1/1] Fetching minio-2018.03.19.19.22.06.txz: 100%    6 MiB 305.6kB/s    00:19    
Checking integrity... done (0 conflicting)
[storage1.local] [1/1] Installing minio-2018.03.19.19.22.06...
===> Creating groups.
Creating group 'minio' with gid '473'.
===> Creating users
Creating user 'minio' with uid '473'.
[storage1.local] [1/1] Extracting minio-2018.03.19.19.22.06: 100%

Bootstrapping pkg from pkg+http://pkg.FreeBSD.org/FreeBSD:11:amd64/quarterly, please wait...
Verifying signature with trusted certificate pkg.freebsd.org.2013102301... done
[storage2.local] Installing pkg-1.10.5...
[storage2.local] Extracting pkg-1.10.5: 100%
Updating FreeBSD repository catalogue...
pkg: Repository FreeBSD load error: access repo file(/var/db/pkg/repo-FreeBSD.sqlite) failed: No such file or directory
[storage2.local] Fetching meta.txz: 100%    944 B   0.9kB/s    00:01    
[storage2.local] Fetching packagesite.txz: 100%    6 MiB 637.1kB/s    00:10    
Processing entries: 100%
FreeBSD repository update completed. 31143 packages processed.
All repositories are up to date.
Updating database digests format: 100%
The following 1 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
        minio: 2018.03.19.19.22.06

Number of packages to be installed: 1

The process will require 22 MiB more space.
6 MiB to be downloaded.
[storage2.local] [1/1] Fetching minio-2018.03.19.19.22.06.txz: 100%    6 MiB 305.6kB/s    00:19    
Checking integrity... done (0 conflicting)
[storage2.local] [1/1] Installing minio-2018.03.19.19.22.06...
===> Creating groups.
Creating group 'minio' with gid '473'.
===> Creating users
Creating user 'minio' with uid '473'.
[storage2.local] [1/1] Extracting minio-2018.03.19.19.22.06: 100%

Bootstrapping pkg from pkg+http://pkg.FreeBSD.org/FreeBSD:11:amd64/quarterly, please wait...
Verifying signature with trusted certificate pkg.freebsd.org.2013102301... done
[storage3.local] Installing pkg-1.10.5...
[storage3.local] Extracting pkg-1.10.5: 100%
Updating FreeBSD repository catalogue...
pkg: Repository FreeBSD load error: access repo file(/var/db/pkg/repo-FreeBSD.sqlite) failed: No such file or directory
[storage3.local] Fetching meta.txz: 100%    944 B   0.9kB/s    00:01    
[storage3.local] Fetching packagesite.txz: 100%    6 MiB 637.1kB/s    00:10    
Processing entries: 100%
FreeBSD repository update completed. 31143 packages processed.
All repositories are up to date.
Updating database digests format: 100%
The following 1 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
        minio: 2018.03.19.19.22.06

Number of packages to be installed: 1

The process will require 22 MiB more space.
6 MiB to be downloaded.
[storage3.local] [1/1] Fetching minio-2018.03.19.19.22.06.txz: 100%    6 MiB 305.6kB/s    00:19    
Checking integrity... done (0 conflicting)
[storage3.local] [1/1] Installing minio-2018.03.19.19.22.06...
===> Creating groups.
Creating group 'minio' with gid '473'.
===> Creating users
Creating user 'minio' with uid '473'.
[storage3.local] [1/1] Extracting minio-2018.03.19.19.22.06: 100%

host #

Let's verify that the Minio package installed successfully.

host # for I in 1 2 3; do jexec storage${I} which minio; done
/usr/local/bin/minio
/usr/local/bin/minio
/usr/local/bin/minio
host #

Now we will configure the /etc/hosts file on each storageX Jail.

root@storage1:/ # cat >> /etc/hosts << __EOF
192.168.43.101 storage1
192.168.43.102 storage2
192.168.43.103 storage3
__EOF
root@storage2:/ # cat >> /etc/hosts << __EOF
192.168.43.101 storage1
192.168.43.102 storage2
192.168.43.103 storage3
__EOF
root@storage3:/ # cat >> /etc/hosts << __EOF
192.168.43.101 storage1
192.168.43.102 storage2
192.168.43.103 storage3
__EOF
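The same /etc/hosts entries can also be pushed from the host into all three Jails with a single loop; a sketch, assuming the /jail/storageX paths used earlier:

```shell
#!/bin/sh
# append the storage hosts entries to each Jail's /etc/hosts from
# the host system (same effect as the per-Jail heredocs)
push_hosts() {
  # ${1} - the Jails root directory (e.g. /jail)
  for I in 1 2 3
  do
    ETC="${1}/storage${I}/etc"
    [ -d "${ETC}" ] || continue   # skip Jails that do not exist yet
    cat >> "${ETC}/hosts" << __EOF
192.168.43.101 storage1
192.168.43.102 storage2
192.168.43.103 storage3
__EOF
  done
}

push_hosts /jail
```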

We will create directories for Minio data.

host # for DIR in 1 2 3 4 5 6 7
do
  for I in 2 3
  do
    jexec storage${I} mkdir -p /data${DIR}
  done
done
host # for DIR in 1 2
do
  for I in 1
  do
    jexec storage${I} mkdir -p /data${DIR}
  done
done

Let's verify that our data directories were created successfully.

host # for I in 1 2 3
  do
    echo storage${I}
    jexec storage${I} ls -1 / | grep data
    echo
  done


storage1
data1
data2

storage2
data1
data2
data3
data4
data5
data6
data7

storage3
data1
data2
data3
data4
data5
data6
data7

A basic minio command example.

root@storage1:/ # minio
NAME:
  minio - Cloud Storage Server.

DESCRIPTION:
  Minio is an Amazon S3 compatible object storage server. Use it to store photos, videos, VMs, containers, log files, or any blob of data as objects.

USAGE:
  minio [FLAGS] COMMAND [ARGS...]

COMMANDS:
  server   Start object storage server.
  gateway  Start object storage gateway.
  update   Check for a new software update.
  version  Print version.
  
FLAGS:
  --config-dir value, -C value  Path to configuration directory. (default: "/root/.minio")
  --quiet                       Disable startup information.
  --json                        Output server logs and startup information in json format.
  --help, -h                    Show help.
  
VERSION:
  2018-03-19T19:22:06Z

Now we can generate the list of directories on the servers to pass as arguments to Minio.

host # for DIR in 1 2
do
  for I in 1 
  do
    echo -n http://
    jls | grep storage${I} | awk '{printf $3}' | sed s/.local//g
    echo ":9000/data${DIR} \\"
  done
done | sort -n

host # for DIR in 1 2 3 4 5 6 7
do
  for I in 2 3
  do
    echo -n http://
    jls | grep storage${I} | awk '{printf $3}' | sed s/.local//g
    echo ":9000/data${DIR} \\"
  done
done | sort -n
http://storage1:9000/data1 \
http://storage1:9000/data2 \
http://storage2:9000/data1 \
http://storage2:9000/data2 \
http://storage2:9000/data3 \
http://storage2:9000/data4 \
http://storage2:9000/data5 \
http://storage2:9000/data6 \
http://storage2:9000/data7 \
http://storage3:9000/data1 \
http://storage3:9000/data2 \
http://storage3:9000/data3 \
http://storage3:9000/data4 \
http://storage3:9000/data5 \
http://storage3:9000/data6 \
http://storage3:9000/data7 \

We can as well just write it down by hand of course 🙂

host # for DIR in 1 2
do
  for I in 1 
  do
    echo -n http://
    jls | grep storage${I} | awk '{printf $3}' | sed s/.local//g
    echo -n ":9000/data${DIR} "
  done
done | sort -n

host # for DIR in 1 2 3 4 5 6 7
do
  for I in 2 3
  do
    echo -n http://
    jls | grep storage${I} | awk '{printf $3}' | sed s/.local//g
    echo -n ":9000/data${DIR} "
  done
done | sort -n

This is our list of data directories that we will use to configure Minio in FreeBSD's main configuration /etc/rc.conf file.

http://storage1:9000/data1 http://storage1:9000/data2 http://storage2:9000/data1 http://storage2:9000/data2 http://storage2:9000/data3 http://storage2:9000/data4 http://storage2:9000/data5 http://storage2:9000/data6 http://storage2:9000/data7 http://storage3:9000/data1 http://storage3:9000/data2 http://storage3:9000/data3 http://storage3:9000/data4 http://storage3:9000/data5 http://storage3:9000/data6 http://storage3:9000/data7
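Since the hostnames and data directories follow a fixed naming scheme, the same 16-endpoint list can also be generated without querying jls(8) at all; a small sketch:

```shell
#!/bin/sh
# print the 16 Minio endpoints purely from the storageX/dataY
# naming scheme, without querying jls(8)
endpoints() {
  # storage1 is the 2-drive 'quorum/witness' Jail
  for DIR in 1 2
  do
    echo "http://storage1:9000/data${DIR}"
  done
  # storage2 and storage3 are the 7-drive 'data' Jails
  for DIR in 1 2 3 4 5 6 7
  do
    for I in 2 3
    do
      echo "http://storage${I}:9000/data${DIR}"
    done
  done
}

endpoints | sort
```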

Now, let's put the Minio settings into the /etc/rc.conf file.

root@storageX:~ # cat > /etc/rc.conf << __EOF 
minio_enable=YES
minio_disks="http://storage1:9000/data1 http://storage1:9000/data2 http://storage2:9000/data1 http://storage2:9000/data2 http://storage2:9000/data3 http://storage2:9000/data4 http://storage2:9000/data5 http://storage2:9000/data6 http://storage2:9000/data7 http://storage3:9000/data1 http://storage3:9000/data2 http://storage3:9000/data3 http://storage3:9000/data4 http://storage3:9000/data5 http://storage3:9000/data6 http://storage3:9000/data7"
__EOF
root@storageX:~ # 
root@storageX:~ # cat /etc/rc.conf
minio_enable=YES
minio_disks="http://storage1:9000/data1 http://storage1:9000/data2 http://storage2:9000/data1 http://storage2:9000/data2 http://storage2:9000/data3 http://storage2:9000/data4 http://storage2:9000/data5 http://storage2:9000/data6 http://storage2:9000/data7 http://storage3:9000/data1 http://storage3:9000/data2 http://storage3:9000/data3 http://storage3:9000/data4 http://storage3:9000/data5 http://storage3:9000/data6 http://storage3:9000/data7"
root@storageX:~ #

Now we will start and configure Minio for the first time.

On each storageX server run the following set of commands.

host # jexec storage3
root@storage3:~ # 
root@storage3:/ # rm -rf /http:\*
root@storage3:/ # rm -rf /usr/local/etc/minio
root@storage3:/ # rm -rf /data?/* /data?/.minio.sys
root@storage3:/ # touch                /var/log/minio.log
root@storage3:/ # chown    minio:minio /var/log/minio.log
root@storage3:/ # mkdir -p             /usr/local/etc/minio
root@storage3:/ # chown -R minio:minio /usr/local/etc/minio
root@storage3:/ # mkdir -p             /http::
root@storage3:/ # chown -R minio:minio /http::
root@storage3:/ # mkdir -p             /http:
root@storage3:/ # chown -R minio:minio /http:
root@storage3:/ # su -m minio -c 'env \\
?   MINIO_ACCESS_KEY=alibaba \\
?   MINIO_SECRET_KEY=0P3NS3S4M3 \\
?   minio server \\
?     --config-dir /usr/local/etc/minio \\
?     http://storage1:9000/data1 \\
?     http://storage1:9000/data2 \\
?     http://storage2:9000/data1 \\
?     http://storage2:9000/data2 \\
?     http://storage2:9000/data3 \\
?     http://storage2:9000/data4 \\
?     http://storage2:9000/data5 \\
?     http://storage2:9000/data6 \\
?     http://storage2:9000/data7 \\
?     http://storage3:9000/data1 \\
?     http://storage3:9000/data2 \\
?     http://storage3:9000/data3 \\
?     http://storage3:9000/data4 \\
?     http://storage3:9000/data5 \\
?     http://storage3:9000/data6 \\
?     http://storage3:9000/data7'
Created minio configuration file successfully at /usr/local/etc/minio
Waiting for the first server to format the disks.
Waiting for the first server to format the disks.
Drive Capacity: 504 GiB Free, 515 GiB Total
Status:         16 Online, 0 Offline. 

Endpoint:  http://192.168.43.103:9000
AccessKey: alibaba 
SecretKey: 0P3NS3S4M3 

Browser Access:
   http://192.168.43.103:9000

Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
   $ mc config host add myminio http://192.168.43.103:9000 alibaba 0P3NS3S4M3

Object API (Amazon S3 compatible):
   Go:         https://docs.minio.io/docs/golang-client-quickstart-guide
   Java:       https://docs.minio.io/docs/java-client-quickstart-guide
   Python:     https://docs.minio.io/docs/python-client-quickstart-guide
   JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide
   .NET:       https://docs.minio.io/docs/dotnet-client-quickstart-guide
host # jexec storage2
root@storage2:~ # 
root@storage2:/ # rm -rf /http:\*
root@storage2:/ # rm -rf /usr/local/etc/minio
root@storage2:/ # rm -rf /data?/* /data?/.minio.sys
root@storage2:/ # touch                /var/log/minio.log
root@storage2:/ # chown    minio:minio /var/log/minio.log
root@storage2:/ # mkdir -p             /usr/local/etc/minio
root@storage2:/ # chown -R minio:minio /usr/local/etc/minio
root@storage2:/ # mkdir -p             /http::
root@storage2:/ # chown -R minio:minio /http::
root@storage2:/ # mkdir -p             /http:
root@storage2:/ # chown -R minio:minio /http:
root@storage2:/ # su -m minio -c 'env \\
?   MINIO_ACCESS_KEY=alibaba \\
?   MINIO_SECRET_KEY=0P3NS3S4M3 \\
?   minio server \\
?     --config-dir /usr/local/etc/minio \\
?     http://storage1:9000/data1 \\
?     http://storage1:9000/data2 \\
?     http://storage2:9000/data1 \\
?     http://storage2:9000/data2 \\
?     http://storage2:9000/data3 \\
?     http://storage2:9000/data4 \\
?     http://storage2:9000/data5 \\
?     http://storage2:9000/data6 \\
?     http://storage2:9000/data7 \\
?     http://storage3:9000/data1 \\
?     http://storage3:9000/data2 \\
?     http://storage3:9000/data3 \\
?     http://storage3:9000/data4 \\
?     http://storage3:9000/data5 \\
?     http://storage3:9000/data6 \\
?     http://storage3:9000/data7'
Created minio configuration file successfully at /usr/local/etc/minio
Waiting for the first server to format the disks.
Waiting for the first server to format the disks.
Drive Capacity: 504 GiB Free, 515 GiB Total
Status:         16 Online, 0 Offline. 

Endpoint:  http://192.168.43.102:9000
AccessKey: alibaba 
SecretKey: 0P3NS3S4M3 

Browser Access:
   http://192.168.43.102:9000

Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
   $ mc config host add myminio http://192.168.43.102:9000 alibaba 0P3NS3S4M3

Object API (Amazon S3 compatible):
   Go:         https://docs.minio.io/docs/golang-client-quickstart-guide
   Java:       https://docs.minio.io/docs/java-client-quickstart-guide
   Python:     https://docs.minio.io/docs/python-client-quickstart-guide
   JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide
   .NET:       https://docs.minio.io/docs/dotnet-client-quickstart-guide
host # jexec storage1
root@storage1:~ # 
root@storage1:/ # rm -rf /http:\*
root@storage1:/ # rm -rf /usr/local/etc/minio
root@storage1:/ # rm -rf /data?/* /data?/.minio.sys
root@storage1:/ # touch                /var/log/minio.log
root@storage1:/ # chown    minio:minio /var/log/minio.log
root@storage1:/ # mkdir -p             /usr/local/etc/minio
root@storage1:/ # chown -R minio:minio /usr/local/etc/minio
root@storage1:/ # mkdir -p             /http::
root@storage1:/ # chown -R minio:minio /http::
root@storage1:/ # mkdir -p             /http:
root@storage1:/ # chown -R minio:minio /http:
root@storage1:/ # su -m minio -c 'env \\
?   MINIO_ACCESS_KEY=alibaba \\
?   MINIO_SECRET_KEY=0P3NS3S4M3 \\
?   minio server \\
?     --config-dir /usr/local/etc/minio \\
?     http://storage1:9000/data1 \\
?     http://storage1:9000/data2 \\
?     http://storage2:9000/data1 \\
?     http://storage2:9000/data2 \\
?     http://storage2:9000/data3 \\
?     http://storage2:9000/data4 \\
?     http://storage2:9000/data5 \\
?     http://storage2:9000/data6 \\
?     http://storage2:9000/data7 \\
?     http://storage3:9000/data1 \\
?     http://storage3:9000/data2 \\
?     http://storage3:9000/data3 \\
?     http://storage3:9000/data4 \\
?     http://storage3:9000/data5 \\
?     http://storage3:9000/data6 \\
?     http://storage3:9000/data7'
Created minio configuration file successfully at /usr/local/etc/minio
Waiting for the first server to format the disks.
Waiting for the first server to format the disks.
Drive Capacity: 504 GiB Free, 515 GiB Total
Status:         16 Online, 0 Offline. 

Endpoint:  http://192.168.43.101:9000
AccessKey: alibaba 
SecretKey: 0P3NS3S4M3 

Browser Access:
   http://192.168.43.101:9000

Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
   $ mc config host add myminio http://192.168.43.101:9000 alibaba 0P3NS3S4M3

Object API (Amazon S3 compatible):
   Go:         https://docs.minio.io/docs/golang-client-quickstart-guide
   Java:       https://docs.minio.io/docs/java-client-quickstart-guide
   Python:     https://docs.minio.io/docs/python-client-quickstart-guide
   JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide
   .NET:       https://docs.minio.io/docs/dotnet-client-quickstart-guide

Here is how it looks in the xterm terminal.

minio-first-run-setup

We can now verify in the browser that it actually works.

minio-browser-01

Now hit [CTRL]+[C] in each of these windows to stop the Minio cluster.

We will now start Minio with FreeBSD rc(8) subsystem as a service.

root@storage1:/ # service minio start
Starting minio.
root@storage1:/ # cat /var/log/minio.log 
root@storage1:/ # service minio status
minio is running as pid 50309.

Let's check if it works.

root@storage1:/ # ps -U minio
  PID TT  STAT    TIME COMMAND
50308  -  IsJ  0:00.00 daemon: /usr/bin/env[50309] (daemon)
50309  -  IJ   0:00.27 /usr/local/bin/minio -C /usr/local/etc/minio server (...)

Now we will do some basic operations: log in to the Minio distributed storage, create a new bucket, and upload a file to it.

minio-browser-02

This is how an empty Minio cluster looks.

minio-browser-03

Select the Create Bucket option from the button below.

minio-browser-04-create-bucket

We will use the name test for our new bucket.

minio-browser-05-create-bucket

It is created and we can access it.

minio-browser-06-bucket

Let's use the Upload File option from the same menu as previously.

minio-browser-07-file-upload

The upload progress shown by Minio.

minio-browser-08-file-upload

The file has indeed been uploaded.

minio-browser-09-file-upload

By clicking on it we may access it directly from the browser.

minio-browser-10-file-display

We can also share a link to that file by using the File Menu as shown below.

minio-browser-10-file-link

The link creation dialog is shown below.

minio-browser-11-file-link

minio-browser-12-file-link

Let's see how Minio distributes the data – the ThinkPad Design – Spirit and Essence.pdf file in our case – over its data directories spread across the servers.

host # jexec storage1
root@storage1:/ # find /data?/test
/data1/test
/data1/test/ThinkPad Design - Spirit and Essence.pdf
/data1/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data1/test/ThinkPad Design - Spirit and Essence.pdf/part.1
/data2/test
/data2/test/ThinkPad Design - Spirit and Essence.pdf
/data2/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data2/test/ThinkPad Design - Spirit and Essence.pdf/part.1
root@storage1:/ # exit
host # jexec storage2
root@storage2:/ # find /data?/test
/data1/test
/data1/test/ThinkPad Design - Spirit and Essence.pdf
/data1/test/ThinkPad Design - Spirit and Essence.pdf/part.1
/data1/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data2/test
/data2/test/ThinkPad Design - Spirit and Essence.pdf
/data2/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data2/test/ThinkPad Design - Spirit and Essence.pdf/part.1
/data3/test
/data3/test/ThinkPad Design - Spirit and Essence.pdf
/data3/test/ThinkPad Design - Spirit and Essence.pdf/part.1
/data3/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data4/test
/data4/test/ThinkPad Design - Spirit and Essence.pdf
/data4/test/ThinkPad Design - Spirit and Essence.pdf/part.1
/data4/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data5/test
/data5/test/ThinkPad Design - Spirit and Essence.pdf
/data5/test/ThinkPad Design - Spirit and Essence.pdf/part.1
/data5/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data6/test
/data6/test/ThinkPad Design - Spirit and Essence.pdf
/data6/test/ThinkPad Design - Spirit and Essence.pdf/part.1
/data6/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data7/test
/data7/test/ThinkPad Design - Spirit and Essence.pdf
/data7/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data7/test/ThinkPad Design - Spirit and Essence.pdf/part.1
root@storage2:/ # exit
host # jexec storage3
root@storage3:/ # find /data?/test
/data1/test
/data1/test/ThinkPad Design - Spirit and Essence.pdf
/data1/test/ThinkPad Design - Spirit and Essence.pdf/part.1
/data1/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data2/test
/data2/test/ThinkPad Design - Spirit and Essence.pdf
/data2/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data2/test/ThinkPad Design - Spirit and Essence.pdf/part.1
/data3/test
/data3/test/ThinkPad Design - Spirit and Essence.pdf
/data3/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data3/test/ThinkPad Design - Spirit and Essence.pdf/part.1
/data4/test
/data4/test/ThinkPad Design - Spirit and Essence.pdf
/data4/test/ThinkPad Design - Spirit and Essence.pdf/part.1
/data4/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data5/test
/data5/test/ThinkPad Design - Spirit and Essence.pdf
/data5/test/ThinkPad Design - Spirit and Essence.pdf/part.1
/data5/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data6/test
/data6/test/ThinkPad Design - Spirit and Essence.pdf
/data6/test/ThinkPad Design - Spirit and Essence.pdf/part.1
/data6/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data7/test
/data7/test/ThinkPad Design - Spirit and Essence.pdf
/data7/test/ThinkPad Design - Spirit and Essence.pdf/xl.json
/data7/test/ThinkPad Design - Spirit and Essence.pdf/part.1
root@storage3:/ # exit

We can also check the Minio configuration file /usr/local/etc/minio/config.json that has been generated.

host # jexec storage1
root@storage1:/ # cat /usr/local/etc/minio/config.json 
{
        "version": "22",
        "credential": {
                "accessKey": "alibaba",
                "secretKey": "0P3NS3S4M3"
        },
        "region": "",
        "browser": "on",
        "domain": "",
        "storageclass": {
                "standard": "",
                "rrs": ""
        },
        "notify": {
                "amqp": {
                        "1": {
                                "enable": false,
                                "url": "",
                                "exchange": "",
                                "routingKey": "",
                                "exchangeType": "",
                                "deliveryMode": 0,
                                "mandatory": false,
                                "immediate": false,
                                "durable": false,
                                "internal": false,
                                "noWait": false,
                                "autoDeleted": false
                        }
                },
                "elasticsearch": {
                        "1": {
                                "enable": false,
                                "format": "",
                                "url": "",
                                "index": ""
                        }
                },
                "kafka": {
                        "1": {
                                "enable": false,
                                "brokers": null,
                                "topic": ""
                        }
                },
                "mqtt": {
                        "1": {
                                "enable": false,
                                "broker": "",
                                "topic": "",
                                "qos": 0,
                                "clientId": "",
                                "username": "",
                                "password": "",
                                "reconnectInterval": 0,
                                "keepAliveInterval": 0
                        }
                },
                "mysql": {
                        "1": {
                                "enable": false,
                                "format": "",
                                "dsnString": "",
                                "table": "",
                                "host": "",
                                "port": "",
                                "user": "",
                                "password": "",
                                "database": ""
                        }
                },
                "nats": {
                        "1": {
                                "enable": false,
                                "address": "",
                                "subject": "",
                                "username": "",
                                "password": "",
                                "token": "",
                                "secure": false,
                                "pingInterval": 0,
                                "streaming": {
                                        "enable": false,
                                        "clusterID": "",
                                        "clientID": "",
                                        "async": false,
                                        "maxPubAcksInflight": 0
                                }
                        }
                },
                "postgresql": {
                        "1": {
                                "enable": false,
                                "format": "",
                                "connectionString": "",
                                "table": "",
                                "host": "",
                                "port": "",
                                "user": "",
                                "password": "",
                                "database": ""
                        }
                },
                "redis": {
                        "1": {
                                "enable": false,
                                "format": "",
                                "address": "",
                                "password": "",
                                "key": ""
                        }
                },
                "webhook": {
                        "1": {
                                "enable": false,
                                "endpoint": ""
                        }
                }
        }
}

S3FS

We can also mount that test bucket from our distributed Minio object storage cluster as a filesystem using the S3FS project. Let's add the s3fs package and mount our bucket.

host # pkg install -y fusefs-s3fs

Now we will configure the password for our bucket.

host # echo test:alibaba:0P3NS3S4M3 > /root/.passwd-s3fs
host # chmod 600 /root/.passwd-s3fs
host # cat /root/.passwd-s3fs 
test:alibaba:0P3NS3S4M3

Now let's do the actual mount.

host # mkdir /tmp/test
host # s3fs \
  -o allow_other \
  -o use_path_request_style \
  -o url=http://192.168.43.101:9000 \
  -o passwd_file=/root/.passwd-s3fs \
  test /tmp/test

The file ThinkPad Design – Spirit and Essence.pdf that we uploaded through the web interface should be here.

host # exa -l /tmp/test
.--------- 10M root 2018-04-16 14:15 ThinkPad Design - Spirit and Essence.pdf

host # file /tmp/test/ThinkPad\ Design\ -\ Spirit\ and\ Essence.pdf 
/tmp/test/ThinkPad Design - Spirit and Essence.pdf: PDF document, version 1.4

host # stat /tmp/test/ThinkPad\ Design\ -\ Spirit\ and\ Essence.pdf
3976265496 2 ---------- 1 root wheel 0 10416953 "Jan  1 01:00:00 1970" "Apr 16 14:35:35 2018" "Jan  1 01:00:00 1970" "Jan  1 00:59:59 1970" 4096 20346 0 /tmp/test/ThinkPad Design - Spirit and Essence.pdf

We can now upload another file into that bucket using the s3fs mount.

host # cp -v /home/vermaden/On\ the\ Shortness\ of\ Life\ -\ Lucius\ Seneca.pdf /tmp/test
/home/vermaden/On the Shortness of Life - Lucius Seneca.pdf -> /tmp/test/On the Shortness of Life - Lucius Seneca.pdf

host # file /tmp/test/On\ the\ Shortness\ of\ Life\ -\ Lucius\ Seneca.pdf 
On the Shortness of Life - Lucius Seneca.pdf: PDF document, version 1.4

We can also verify that the file uploaded through s3fs is visible in the web interface.

minio-browser-13-s3fs-upload

Real Hardware

Now, as we have a working proof of concept for the distributed Minio setup, how about putting it on real hardware for real storage purposes? I would set up a 16-node distributed Minio cluster on Supermicro SSG-5018D8-AR12L hardware. Supermicro even suggests using that kind of server for object storage, here is their white paper on that topic – Object Storage Solution for Data Archive using Supermicro SSG-5018D8-AR12L and OpenIO SDS – but they use OpenIO instead of Minio for the distributed object storage solution.

This server features the Supermicro X10SDV-7TP4F motherboard. This is important as this motherboard officially supports the FreeBSD 11.x operating system according to the Supermicro OS Compatibility page.

The motherboard specification includes these features.

 1 x Intel Xeon D-1537 8-Core / 16-Threads TDP 35W
 4 x UDIMM for up to 128GB ECC RDIMM DDR4 2133MHz
12 x 3.5" SAS2/SATA3 Hot-Swap HDD Bays
 4 x 2.5" Cold-Swap HDD Bays
 1 x Controller Intel SoC for 4 SATA3 (6Gbps) Ports
 1 x Controller Broadcom 2116 for 16 SATA3 (6Gbps) Ports
 1 x Expansion Slot PCI-E 3.0 x8 
 1 x Expansion Slot M.2 PCIe 3.0 x4
 1 x Expansion Slot Mini-PCIe w/ mSATA Support
 2 x 10G SFP+ Port
 2 x 1GbE LAN Port
 2 x External USB 3.0 Port
 1 x Internal USB 2.0 Port
 2 x 400W High-Efficiency Redundant Power Supplies

You can configure your own and get approximated price using the Thinkmate site from here:
https://www.thinkmate.com/system/superstorage-server-5018d8-ar12l

I would add these components to the basic setup:

 4 x UDIMM FULL 128 GB ECC RDIMM DDR4
 2 x 240GB Micron 5100 MAX 2.5" SATA 6.0Gb/s SSD
 2 x 7.68TB Micron 5200 ECO Series 2.5" SATA 6.0Gb/s SSD
12 x 12TB SATA 6.0Gb/s 7200RPM 3.5" Hitachi Ultrastar™ He12
 3 x SanDisk Cruzer Fit 32GB USB 3.0

Now, I would use the 3 x SanDisk Cruzer Fit 32GB USB 3.0 disks to install FreeBSD on a ZFS root/boot pool configured as a mirror plus a spare. We do not need performance here.

Then, the 12 x 12TB SATA 6.0Gb/s 7200RPM 3.5″ Hitachi Ultrastar™ He12 drives will be used as RAIDZ (the RAID5 equivalent in ZFS, without the write hole) for the Minio data, in an 11 + 1 setup, which means 11 drives for data and 1 drive for parity. As we can lose HALF of the Minio servers I would not waste a 12 TB drive for a spare here. Then, I would use 2 x 240GB Micron 5100 MAX 2.5″ SATA 6.0Gb/s SSD in a mirror for the ZFS ZIL (ZFS Intent Log) to accelerate writes and 2 x 7.68TB Micron 5200 ECO Series 2.5″ SATA 6.0Gb/s SSD for the ZFS read cache (L2ARC).
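The pool layout described above could be sketched with a single zpool(8) command – note that the device names (da0 through da11 for the 3.5″ drives, ada0/ada1 for the ZIL SSDs, ada2/ada3 for the L2ARC SSDs) are my assumptions here, your enumeration may differ.

```
# Hypothetical pool layout - adjust device names to your hardware.
zpool create minio \
  raidz da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11 \
  log mirror ada0 ada1 \
  cache ada2 ada3
```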

The network would be set up on the 2 x 10G SFP+ ports with LACP as the lagg0 interface, so each server would have 20 Gbit connectivity. This gives us a total of 320 Gbit theoretical network throughput.

This setup would give us 132 TB of ZFS pool space, with 15 TB for the read cache and 240 GB for writes, per single 1U server. Doing the math, this gives us 2112 TB (more than 2 PB) of space for Minio data.
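A quick back-of-the-envelope check of these numbers – all figures come from the setup described above, and halving for Minio redundancy is just an approximation of its erasure coding overhead.

```shell
DRIVES=12; DRIVE_TB=12; PARITY=1; SERVERS=16; PORTS=2; PORT_GBIT=10
POOL_TB=$(( (DRIVES - PARITY) * DRIVE_TB ))   # RAIDZ 11+1 data space per server
TOTAL_TB=$(( POOL_TB * SERVERS ))             # raw Minio space across the cluster
USABLE_TB=$(( TOTAL_TB / 2 ))                 # roughly half survives erasure coding
NET_GBIT=$(( PORTS * PORT_GBIT * SERVERS ))   # theoretical aggregate throughput
echo "pool=${POOL_TB}TB total=${TOTAL_TB}TB usable=${USABLE_TB}TB net=${NET_GBIT}Gbit"
```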

With the Minio algorithm for data redundancy, we will have about 1 PB of usable storage space in our 16U Object Storage FreeBSD Appliance.

Not bad for my taste πŸ™‚

UPDATE 1

The Distributed Object Storage with Minio on FreeBSD article was included in the BSD Now 246 – Disclosure episode.

Thanks for mentioning!

EOF

FreeBSD Network Management with network.sh Script

When You use only one connection on FreeBSD, the best practice is to just put its whole configuration into the /etc/rc.conf file, for example a typical redundant server connection would look like the one below.

cloned_interfaces="lagg0"
ifconfig_igb0="-lro -tso -vlanhwtag mtu 9000 up"
ifconfig_igb1="-lro -tso -vlanhwtag mtu 9000 up"
ifconfig_lagg0="laggproto lacp laggport igb0 laggport igb1 up"
ifconfig_lagg0_alias0="inet 10.254.17.2/24"

If You must use more than one connection and You often switch between them, sometimes several times a day, then using FreeBSD's main config file is not the most convenient way for such operations.

This is especially true for laptops, where You often switch between WWAN (usually a 3G connection), WLAN (typical WiFi connection) and even a LAN cable.

You can of course use graphical NetworkMgr from GhostBSD project which is described as “Python GTK3 network manager for FreeBSD, GhostBSD, TrueOS and DragonFlyBSD. NetworkMgr support both netif and OpenRC network” citing the project site – https://github.com/GhostBSD/networkmgr – it is also available in FreeBSD Ports and as package – net-mgmt/networkmgr.

GhostBSD-networkmgr

What I miss in NetworkMgr is the WWAN connection management, DNS management, optional random MAC generation and network shares unmount at disconnect from network. With my solution – network.sh – you still need to edit /etc/wpa_supplicant.conf and /etc/ppp/ppp.conf files by hand so it’s also not a perfect solution for typical desktop usage, but you do not edit these files every day.
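For reference – a minimal /etc/wpa_supplicant.conf entry that network.sh would then use for WLAN connections may look like the one below (the SSID and passphrase are of course placeholders).

```
network={
        ssid="HOME-NETWORK-SSID"
        psk="SECRETPASSWORD"
        priority=5
}
```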

As I use WWAN, WLAN and LAN connections on my laptop depending on the location, I wrote a script to automate this connection management in a deterministic and convenient way, at least for me.

It can also set DNS to some safe/nologging providers or even a random safe DNS, and generate a legitimate MAC address for both LAN and WLAN if needed, even with a real OUI for the first three octets if You also have the additional network.sh.oui.txt file with them inside.

Here is the network.sh script help message.

% network.sh help 
USAGE:
  network.sh TYPE [OPTIONS]

TYPES:
  lan
  wlan
  wwan
  dns

OPTIONS:
  start
  start SSID|PROFILE
  stop
  example

EXAMPLES:
  network.sh lan start
  network.sh lan start IP.IP.IP.IP/MASK
  network.sh lan start IP.IP.IP.IP/MASK GW.GW.GW.GW
  network.sh lan restart
  network.sh wlan start
  network.sh wlan start HOME-NETWORK-SSID
  network.sh wwan example
  network.sh dns onic
  network.sh dns udns
  network.sh dns nextdns
  network.sh dns cloudflare
  network.sh dns ibm
  network.sh dns random
  network.sh dns IP.IP.IP.IP
  network.sh doas
  network.sh sudo
  network.sh status

If You run network.sh with appropriate arguments to start a network connection, it will display on the screen what commands it runs to achieve that. It also makes use of sudo(8) or doas(1), assuming that You are in the network group. To add yourself to the network group type the command below.

# pw groupmod network -m yourself

The network.sh doas command will print what rights it needs to work without root privileges, same for the network.sh sudo command, an example is below.

% network.sh doas
  # pw groupmod network -m YOURUSERNAME
  # cat /usr/local/etc/doas.conf
  permit nopass :network as root cmd /etc/rc.d/netif args onerestart
  permit nopass :network as root cmd /usr/sbin/service args squid onerestart
  permit nopass :network as root cmd dhclient
  permit nopass :network as root cmd ifconfig
  permit nopass :network as root cmd killall args -9 dhclient
  permit nopass :network as root cmd killall args -9 ppp
  permit nopass :network as root cmd killall args -9 wpa_supplicant
  permit nopass :network as root cmd ppp
  permit nopass :network as root cmd route
  permit nopass :network as root cmd tee args -a /etc/resolv.conf
  permit nopass :network as root cmd tee args /etc/resolv.conf
  permit nopass :network as root cmd umount
  permit nopass :network as root cmd wpa_supplicant

The network.sh script does not edit the /usr/local/etc/doas.conf or /usr/local/etc/sudoers files, You have to put these lines there yourself. An example doas setup for the network.sh script is below.

# pkg install -y doas

# cat >> /usr/local/etc/doas.conf << __EOF
permit nopass :network as root cmd /etc/rc.d/netif args onerestart
permit nopass :network as root cmd /usr/sbin/service args squid onerestart
permit nopass :network as root cmd dhclient
permit nopass :network as root cmd ifconfig
permit nopass :network as root cmd killall args -9 dhclient
permit nopass :network as root cmd killall args -9 ppp
permit nopass :network as root cmd killall args -9 wpa_supplicant
permit nopass :network as root cmd ppp
permit nopass :network as root cmd route
permit nopass :network as root cmd tee args -a /etc/resolv.conf
permit nopass :network as root cmd tee args /etc/resolv.conf
permit nopass :network as root cmd umount
permit nopass :network as root cmd wpa_supplicant
__EOF
# 

# pw groupmod network -m yourself

Upon disconnect the network.sh script also forcefully unmounts all network shares.

The idea is that it does only one connection type at a time. When You type network.sh lan start and then type network.sh wlan start, it will reset the entire FreeBSD network stack to defaults (to the settings that are in the /etc/rc.conf file) and then connect to WiFi in a ‘clean network environment’, so to say. As I use 3 different methods of connecting to various networks I do not have any network settings in the /etc/rc.conf file, but You may prefer for example to have DHCP for the local LAN enabled if that is more convenient for You.
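A typical day with the script could then look like the commands below – each start implicitly resets the stack first (HOME-NETWORK-SSID is a placeholder).

```
network.sh lan start                     # wired connection at the office
network.sh wlan start HOME-NETWORK-SSID  # WiFi at home
network.sh stop                          # everything down, back to /etc/rc.conf defaults
```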

The settings are at the beginning of the network.sh script, You should modify them to match your needs and the hardware that You own.

# SETTINGS
LAN_IF=em0
LAN_RANDOM_MAC=0
WLAN_IF=wlan0
WLAN_PH=iwn0
WLAN_RANDOM_MAC=0
WLAN_COUNTRY=PL
WLAN_REGDOMAIN=NONE
WWAN_IF=tun0
WWAN_PROFILE=WWAN
NAME=${0##*/}
NETFS="nfs,smbfs,fusefs.sshfs"
TIMEOUT=16
DELAY=0.5
SUDO_WHICH=0
SUDO=0
DOAS_WHICH=0
DOAS=1
ROOT=0

You can specify other NETFS filesystems that You want to forcefully unmount during network stop or set a different physical WLAN adapter (WLAN_PH option), like ath0 for Atheros chips. Similar for the LAN interface, which also defaults to an Intel based network card with the em0 driver (LAN_IF option).
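The idea behind the NETFS list can be sketched like this – note this is not the script's actual code, just an illustration with sample mount(8) output embedded so it is self-contained; the real script would feed it live mount data and umount -f each printed path.

```shell
NETFS="nfs,smbfs,fusefs.sshfs"
# Sample 'mount -p' style lines: device, mountpoint, fstype, options, dump, pass.
MOUNTED=$(printf '%s\n' \
  '10.0.0.1:/export /mnt/nfs nfs rw 0 0' \
  '/dev/ada0p2 / ufs rw 1 1' \
  '//user@srv/share /mnt/smb smbfs rw 0 0' |
  awk -v types="${NETFS}" '
    BEGIN { n = split(types, t, ",") }
    { for (i = 1; i <= n; i++) if ($3 == t[i]) print $2 }')
# Prints only the mountpoints whose filesystem type is on the NETFS list.
echo "${MOUNTED}"
```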

You can disable the random MAC address for LAN with LAN_RANDOM_MAC=0 and enable generation of a random MAC address for WiFi networks with the WLAN_RANDOM_MAC=1 option.

You should also decide if You want to use sudo (SUDO option) or doas (DOAS option).

Here is the network.sh script.

Here is an example of stopping all network connections.

% network.sh stop
doas killall -9 wpa_supplicant
doas killall -9 ppp
doas killall -9 dhclient
doas ifconfig wlan0 destroy
doas ifconfig em0 down
echo | doas tee /etc/resolv.conf
doas /etc/rc.d/netif onerestart
%

Here is an example of a WLAN (or should I say WiFi) network connection start.

% network.sh wlan start
__network_reset()
__net_shares_umount()
doas killall -9 wpa_supplicant
doas killall -9 ppp
doas killall -9 dhclient
doas ifconfig em0 down
doas ifconfig wlan0 down
echo | doas tee /etc/resolv.conf
doas /etc/rc.d/netif restart
doas ifconfig wlan0 up
doas ifconfig wlan0 scan
doas ifconfig wlan0 ssid -
doas wpa_supplicant -i wlan0 -c /etc/wpa_supplicant.conf -s -B
__wlan_wait_associated()
doas dhclient -q wlan0
__dns_check_gateway()
echo | doas tee /etc/resolv.conf
echo 'nameserver 10.0.0.1' | doas tee -a /etc/resolv.conf
__dns_check()
__squid_restart()
doas ifconfig wlan0 powersave

Here is an example of a DNS change.

% network.sh dns ibm
echo | doas tee /etc/resolv.conf
echo 'nameserver 9.9.9.9' | doas tee -a /etc/resolv.conf

If You have any problems with the network.sh script then let me know, I will try to fix them ASAP.

If You are more into OpenBSD than FreeBSD, then Vincent Delft wrote nmctl – a Network Manager Control tool for OpenBSD – available here – http://vincentdelft.be/post/post_20171023.

There is also another OpenBSD project for network management by Aaron Poffenberger – netctl – a CLI network-location manager for OpenBSD – available here – https://github.com/akpoff/netctl.

UPDATE 1 – Connect to Open/Unsecured WiFi Network

Recently when I was attending the Salt workshop during the NLUUG Autumn Conference 2018 at Utrecht, Netherlands, I wanted to connect to an open unsecured WiFi network called 'Utrecht Hotel'. My phone of course connected to it instantly, but FreeBSD on the other hand was not able to connect to it. As it turns out, if you want to enable wpa_supplicant(8) to connect to open unsecured networks, a separate /etc/wpa_supplicant.conf option is needed (one option for all open unsecured networks – no need to create such a rule for each open/unsecured network).

These are the lines in the /etc/wpa_supplicant.conf file:

% grep -C 2 key_mgmt=NONE /etc/wpa_supplicant.conf

network={
        key_mgmt=NONE
        priority=0
}

I also modified network.sh to contain that information in the examples section and made a little fix to always reset the SSID that was set/forced during earlier usage.

# ifconfig wlan0 ssid -

Now the network.sh should be even more pleasant to use.

UPDATE 2 – Openbox Integration

In one of the FreeBSD Desktop series articles I described how to set up the Openbox window manager – FreeBSD Desktop – Part 12 – Configuration – Openbox – available here.

Below is an example of the integration of the network.sh script with the Openbox window manager.

network.sh.openbox.menu.jpg

… and here is the code used in the ~/.config/openbox/menu.xml file.

network.sh.openbox.menu.code

UPDATE 3 – Updated Status Page

I have just added a reworked status page to the network.sh script.

It's already updated in the GitHub ‘network’ repository:
https://github.com/vermaden/scripts/blob/master/network.sh

Here is how it looks.

network.sh.status.png

UPDATE 4 – Major Rework

After using the network.sh script for a while I saw some needed changes. The time has come and I finally made them. I also found a problem with creating the wlan0 virtual device from the physical device (like iwn0) when it already exists.

When you start the network.sh script for the first time and wlan0 is not yet created, the problem does not exist, but when wlan0 already exists, network.sh waited for a whopping 22 seconds on this single command. Now network.sh checks if the wlan0 device already exists, which allows a WiFi connection in less than 3 seconds.

Before.

#DOAS# permit nopass :network as root cmd ifconfig
#SUDO# %network ALL = NOPASSWD: /sbin/ifconfig *
${CMD} ifconfig ${WLAN_IF} create wlandev ${WLAN_PH} 2> /dev/null
echo ${CMD} ifconfig ${WLAN_IF} create wlandev ${WLAN_PH}

After.

if ! ifconfig ${WLAN_IF} 1> /dev/null 2> /dev/null
then
  #DOAS# permit nopass :network as root cmd ifconfig
  #SUDO# %network ALL = NOPASSWD: /sbin/ifconfig *
  ${CMD} ifconfig ${WLAN_IF} create wlandev ${WLAN_PH} 2> /dev/null
  echo ${CMD} ifconfig ${WLAN_IF} create wlandev ${WLAN_PH}
fi

I used gnomon to benchmark the script execution.

Here is its simple installation process.

# pkg install -y npm
# npm install -g gnomon

Here is how it performed before the optimization. About 25 seconds.

network.sh.SLOW.CREATE.before

And here is how it performs now. About 3 seconds.

network.sh.SLOW.CREATE.after

At first I suspected the /etc/rc.d/netif FreeBSD startup script, but the real culprit was the slow ifconfig wlan0 create wlandev iwn0 command.

I also made it more verbose to better show where the time is spent.

network.sh.more.verbous

It's now possible to set a static IP and gateway in LAN mode and a static IP in DNS mode.

network.sh.static.lan

The complete summary of changes and improvements is here:

  • Static IP address and gateway on LAN now possible.
  • Specify DNS by IP address.
  • Simplified __random_mac() function.
  • Fixed __wlan_wait_associated() function.
  • Removed unneeded call to “create wlandev” in WLAN mode.
  • Other minor fixes.
  • WiFi (re)connection now possible under 3 seconds instead of 25+ seconds.

I also created a dedicated GitHub repository for the network.sh script.

EOF