
ZFS on SMR Drives

The ZFS filesystem (lately more often called OpenZFS – after the project name) is a great filesystem for many purposes – from home or desktop/laptop solutions to enterprise offerings. Traditional disk drives have non-overlapping magnetic tracks parallel to each other. These are PMR disks (Perpendicular Magnetic Recording). Hard disk drive manufacturers – to pack even more data into the same size platters – also offer SMR disks (Shingled Magnetic Recording). In SMR disks data tracks are written to overlap part of the previously written track – this results in narrower tracks and higher density. I will try to visualize this difference below using my favorite Enterprise Architect ASCII Edition software.

 PMR                    SMR

[xxx][___][___][___]   [xx[__[__[___]
[___][xxx][___][___]   [__[xx[__[___]
[___][___][xxx][___]   [__[__[xx[___]
[___][___][___][xxx]   [__[__[__[xxx]
[___][xxx][___][xxx]   [__[xx[__[xxx]
[xxx][___][___][xxx]   [xx[__[__[xxx]

12345678901234567890   12345678901234

I marked the filled blocks on both disks with xxx marks. As you can see by comparing the 'width' of the used space above – the same data on an SMR disk takes less physical space than on a traditional PMR drive. This comes at a price though. Writes are a little 'crippled' compared to PMR drives. Especially heavy and random I/O writes are 'problematic' and slower on SMR drives … but it does not mean they are useless.


For backup or clone purposes they are more than enough. I personally use SMR drives for my backup solutions – it is just about the price/performance ratio.

Speed

How does ZFS behave on SMR drives? Very well I would say. ZFS tries to pack as much random I/O as possible into sequential writes with its many features – described in detail in the zpool-features(7) man page for example.

I recently tried ZFS on top of a GELI encrypted partition on a 5 TB external USB SMR drive. I needed to copy a little more than 3 TB of data there. I used rsync(1) for that purpose. These are the arguments I use for my rsync(1) jobs.

% rsync --modify-window=1 -l -t -r -D -v -S -H --force    \
        --progress --no-whole-file --numeric-ids --delete \
        /files/ /media/external/files/

Of course I do not write all these options by hand – I just use a script wrapper for that – rsync-delete.sh – available on my scripts page.

As I started to copy files onto the drive I watched the write speeds using the iostat(8) and zpool-iostat(8) tools. I expected quite slow operation but even with zstd compression and AES-XTS 256bit GELI encryption enabled I got pretty decent results.

Here are the iostat(8) results. Each line is an average over 10 minutes (600 seconds). Check the speeds for the da0 drive below.

% iostat 600
       tty            ada0             ada1              da0             cpu
 tin  tout KB/t  tps  MB/s  KB/t  tps  MB/s  KB/t  tps  MB/s  us ni sy in id
   1     1  513  120  59.9  29.5   39   1.1   742   65  46.8   4  8 17  2 69
   0     2  615   94  56.6  19.1   22   0.4   751   68  49.8   1  3 14  1 82
   0     0  561  106  57.9  17.9   20   0.4   760   70  52.0   1  2 14  1 82
   0     0 1015   57  56.8  18.4   16   0.3   769   68  50.9   1  3 15  1 81
   0     0 1017   57  56.3  18.5   16   0.3   757   68  50.6   1  3 14  1 81
   0     1  752   72  53.0  16.6   23   0.4   765   67  50.1   1  1 13  0 85
   0     0 1014   51  50.1  16.5   21   0.3   723   68  48.3   1  1 13  0 86
   0     0 1012   51  50.2  19.8   18   0.3   743   68  49.2   1  1 12  0 86

And here are the zpool-iostat(8) results.

% zpool iostat POOL 600
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
POOL        3.18T  1.37T      7     56  53.5K  40.7M
POOL        3.20T  1.34T      0     57  9.01K  41.4M
POOL        3.22T  1.33T      0     47  3.29K  32.3M
POOL        3.24T  1.31T      0     47  5.59K  33.9M
POOL        3.25T  1.29T      0     43  3.39K  24.3M
POOL        3.27T  1.28T      0     42  3.01K  25.5M
POOL        3.28T  1.27T      0     44  3.14K  26.8M
POOL        3.29T  1.26T      0     42  3.49K  23.9M

The drive was attached over a USB 3.0 port so there was no ~35 MB/s limitation from a USB 2.0 port. I would say that the results are very decent and consistent.

Tuning

There are several settings that can help you squeeze the maximum from SMR drives on the ZFS filesystem.

First are the ZFS pool settings. You want the latest zstd compression to save some space. Better compression also means fewer physical bytes need to be written to the drive – so fewer I/O operations. You should also set atime to off as it will not be needed. You should also increase recordsize to something really big like 1m (1 megabyte) so you will get a higher compressratio and will need less metadata for the same amount of data. Keep in mind that ZFS will still use variable block sizes and not only the 1m maximum. If a file is smaller (like 100k) then it will take for example 80k (after applied zstd compression). You will not waste 920k here πŸ™‚
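You can check these properties – and the achieved compressratio – at any time with a plain zfs get call – for example:

% zfs get compression,atime,recordsize,compressratio POOL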

Keep in mind that most newer and larger drives use 4k sectors (instead of 512b ones). Sometimes it's the 512e method – which means that the drive firmware 'presents' a device with 512b sectors while underneath eight of these 512b sectors lay on a single physical 4k sector. For these reasons it's important to keep in mind several things.
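On FreeBSD you can check what the drive reports with diskinfo(8) – a 512e drive will typically report a 512 sectorsize along with a 4096 stripesize:

# diskinfo -v da0 | grep -e sectorsize -e stripesize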

When adding new partitions with gpart(8) remember to align them to 4k with the -a 4k argument.
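If the disk is brand new it also needs a partition scheme first – assuming GPT here:

# gpart create -s gpt da0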

# gpart add -t freebsd-zfs -a 4k da0

Next – when initializing the geli(8) encryption layer – make sure you add the -s 4096 argument.

# geli init -s 4096 /dev/da0p1

The last thing is ZFS pool creation with the proper ashift property – it cannot be changed later. On FreeBSD UNIX it's done this way:

# sysctl vfs.zfs.min_auto_ashift=12
# zpool create POOL da0
# zdb -C POOL | grep ashift
                ashift: 12

If you are curious what 12 means then the table below will help you:

ASHIFT  BLOCKSIZE
     9  512b
    10  1k
    11  2k
    12  4k
    13  8k

Last but not least is the redundant_metadata option. By default it's set to all but it's desirable to set it to most. Do you need redundant metadata? I think not. When your single drive fails the redundant metadata will not help – and if your ZFS pool has some redundancy level like raidz or mirror then redundant metadata is also not needed because the metadata is already 'normally' redundant – being spread across several disks.

Keep in mind that the ZFS resilver process on some of these SMR drives can take forever. Some people on Reddit reported that they successfully resilvered their ZFS pools with SMR drives but that does not have to be the case for all SMR drives out there. You can also check the Ars Technica tests of resilver on SMR disks.

Here is the summary of the suggested ZFS tunables – you will find an in-depth description of all of them in the zfsprops(7) man page.

# zfs set redundant_metadata=most POOL
# zfs set compression=zstd        POOL
# zfs set atime=off               POOL
# zfs set recordsize=1m           POOL

In theory the TRIM operations upon deletion would create additional unwanted 'stress' for SMR drives – which would suggest that TRIM operations should be disabled on such non-SSD drives and that you could disable them entirely on the ZFS pool level … but.

TRIM commands issued by the operating system allow the SMR HDD internal controller to get the information that certain areas/blocks on the SMR HDD platters are no longer in use. It means that writes to such areas can be performed without the slow read-modify-write pattern.

This means we are leaving the autotrim option on (enabled) for SMR drives.

# zpool set autotrim=on POOL
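You can confirm the setting like that:

# zpool get autotrim POOL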

Also – if needed – you can manually trigger the TRIM operations with this command.

# zpool trim POOL
# zpool status POOL
  pool: POOL
 state: ONLINE
  scan: scrub repaired 0B in 02:17:22 with 0 errors on Sun May  8 05:18:22 2022
config:

        NAME          STATE     READ WRITE CKSUM
        POOL          ONLINE       0     0     0
          da0p1.eli   ONLINE       0     0     0  (trimming)

errors: No known data errors


By default the TRIM commands are executed at a rate of 64 on FreeBSD. You can limit them to 1 and still have them enabled with the following sysctl(8) tunable.

# sysctl vfs.zfs.vdev.trim_max_active=1

If you want to make it survive across reboots then put it into the /etc/sysctl.conf file.
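Like that – for example:

# echo 'vfs.zfs.vdev.trim_max_active=1' >> /etc/sysctl.conf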

Logic could suggest that simpler/older filesystems – such as FreeBSD UFS for example – could be a more suitable solution for SMR drives … but reality shows otherwise. Check this Reddit thread for example – Appalling Performance on External USB SMR Drive – to name just one.

Hope this article will help you get the most out of your SMR drives.

Regards.

EOF

ZFS Compatibility

The best free filesystem on Earth – ZFS – also often named OpenZFS recently – has become very portable in recent years of its development. The OpenZFS Distributions page lists 6 (six) operating systems already.

They are:

  • FreeBSD
  • Illumos
  • Linux
  • MacOS
  • NetBSD
  • Windows

… but if you would like to create a ZFS pool compatible with all of them … which options and ZFS features should you choose? There is an OpenZFS Feature Flags page dedicated exactly to that topic.

[zfs-feature-flags screenshot]

These are the ones that have a yes value for all operating systems.

  • async_destroy
  • bookmarks
  • empty_bpobj
  • enabled_txg
  • filesystem_limits
  • lz4_compress
  • hole_birth
  • multi_vdev_crash_dump
  • spacemap_histogram

I would also include the following ones as only the older NetBSD 4.0.5 version does not support them – they are supported in the newer NetBSD 5.3 version.

  • embedded_data
  • large_blocks
  • sha512
  • skein

There is also a dedicated zpool-features(7) man page with that information on the OpenZFS page – and also a zpool-features(7) man page on the FreeBSD page.

On the FreeBSD system the /usr/share/zfs/compatibility.d directory has files with supported ZFS Feature Flags for many major operating systems and ZFS versions.

% ls /usr/share/zfs/compatibility.d
2018
2019
2020
2021
compat-2018
compat-2019
compat-2020
compat-2021
freebsd-11.0
freebsd-11.1
freebsd-11.2
freebsd-11.3
freebsd-11.4
freebsd-12.0
freebsd-12.1
freebsd-12.2
freenas-11.0
freenas-11.1
freenas-11.2
freenas-11.3
freenas-9.10.2
grub2
openzfs-2.0-freebsd
openzfs-2.0-linux
openzfs-2.1-freebsd
openzfs-2.1-linux
openzfsonosx-1.7.0
openzfsonosx-1.8.1
openzfsonosx-1.9.3
openzfsonosx-1.9.4
truenas-12.0
ubuntu-18.04
ubuntu-20.04
zol-0.6.1
zol-0.6.4
zol-0.6.5
zol-0.7
zol-0.8

Unfortunately it misses NetBSD and Illumos systems for example … but having information from the OpenZFS Feature Flags page we can find Feature Flags set that will be supported everywhere.

Here are the stats for supported ZFS Feature Flags. The higher the number the more operating systems and ZFS versions it covers.

% grep -h '^[^#]' /usr/share/zfs/compatibility.d/* \
    | sort -n \
    | uniq -c \
    | sort -n
   2 draid
   5 bookmark_written
   5 device_rebuild
   5 livelist
   5 log_spacemap
   5 redacted_datasets
   5 redaction_bookmarks
   5 zstd_compress
   9 allocation_classes
   9 bookmark_v2
   9 project_quota
   9 resilver_defer
  10 edonr
  10 encryption
  11 large_dnode
  11 userobj_accounting
  18 spacemap_v2
  20 device_removal
  20 obsolete_counts
  20 zpool_checkpoint
  31 sha512
  31 skein
  32 multi_vdev_crash_dump
  36 filesystem_limits
  36 large_blocks
  37 bookmarks
  37 embedded_data
  37 enabled_txg
  37 extensible_dataset
  37 hole_birth
  37 spacemap_histogram
  38 async_destroy
  38 empty_bpobj
  38 lz4_compress

As GNU GRUB is very outdated when it comes to ZFS support it should be a pretty bulletproof starting point for a limited ZFS Feature Flags set.

% cat /usr/share/zfs/compatibility.d/grub2
# Features which are supported by GRUB2
async_destroy
bookmarks
embedded_data
empty_bpobj
enabled_txg
extensible_dataset
filesystem_limits
hole_birth
large_blocks
lz4_compress
spacemap_histogram

To make sure we are compatible we will now cross-check the GNU GRUB data.

First we will ‘generate’ the grep(1) command arguments that we will use in the next command.

% grep '^[^#]' /usr/share/zfs/compatibility.d/grub2 \
  | while read I
    do
      echo "-e ' ${I}' \\"
    done
-e ' async_destroy' \
-e ' bookmarks' \
-e ' embedded_data' \
-e ' empty_bpobj' \
-e ' enabled_txg' \
-e ' extensible_dataset' \
-e ' filesystem_limits' \
-e ' hole_birth' \
-e ' large_blocks' \
-e ' lz4_compress' \
-e ' spacemap_histogram' \

Let's now use these arguments to filter the ZFS features.

% grep -h '^[^#]' /usr/share/zfs/compatibility.d/* \
    | sort -n \
    | uniq -c \
    | sort -n \
    | grep -e ' async_destroy' \
           -e ' bookmarks' \
           -e ' embedded_data' \
           -e ' empty_bpobj' \
           -e ' enabled_txg' \
           -e ' extensible_dataset' \
           -e ' filesystem_limits' \
           -e ' hole_birth' \
           -e ' large_blocks' \
           -e ' lz4_compress' \
           -e ' spacemap_histogram' \
    | wc -l
      11

% grep -h '^[^#]' /usr/share/zfs/compatibility.d/grub2 | wc -l
      11

So it seems that the GRUB list of ZFS Feature Flags is pretty compatible.

Let's now cross-reference that GRUB data with the data from the OpenZFS Feature Flags page.

I will create a new /usr/share/zfs/compatibility.d/OZFF file with these ZFS Feature Flags as its content.

% cat /usr/share/zfs/compatibility.d/OZFF
async_destroy
bookmarks
empty_bpobj
enabled_txg
filesystem_limits
lz4_compress
hole_birth
multi_vdev_crash_dump
spacemap_histogram
embedded_data
large_blocks
sha512
skein

% wc -l /usr/share/zfs/compatibility.d/OZFF
      13

So there are 11 GRUB ZFS Feature Flags and 13 OpenZFS ‘Compatible’ Feature Flags.

Let's see how they compare.

% cat /usr/share/zfs/compatibility.d/OZFF \
    | grep -e async_destroy \
           -e bookmarks \
           -e embedded_data \
           -e empty_bpobj \
           -e enabled_txg \
           -e extensible_dataset \
           -e filesystem_limits \
           -e hole_birth \
           -e large_blocks \
           -e lz4_compress \
           -e spacemap_histogram \
    | wc -l
      10

I expected 11 here instead of 10 … so we will now have to compare the GRUB results with the OpenZFS Feature Flags set directly.

% grep -h '^[^#]' /usr/share/zfs/compatibility.d/{grub2,OZFF} \
    | sort -n \
    | uniq -c \
    | sort -n
   1 extensible_dataset
   1 multi_vdev_crash_dump
   1 sha512
   1 skein
   2 async_destroy
   2 bookmarks
   2 embedded_data
   2 empty_bpobj
   2 enabled_txg
   2 filesystem_limits
   2 hole_birth
   2 large_blocks
   2 lz4_compress
   2 spacemap_histogram

The features with a count of 1 appear on only one of the two lists – so after dropping them it seems that we finally got our 10 most compatible OpenZFS Feature Flags.

It's this set:

  • async_destroy
  • bookmarks
  • embedded_data
  • empty_bpobj
  • enabled_txg
  • filesystem_limits
  • hole_birth
  • large_blocks
  • lz4_compress
  • spacemap_histogram

To make it more comfortable to use we will put them into a separate /usr/share/zfs/compatibility.d/COMPATIBLE file.

# cat /usr/share/zfs/compatibility.d/COMPATIBLE
async_destroy
bookmarks
embedded_data
empty_bpobj
enabled_txg
filesystem_limits
hole_birth
large_blocks
lz4_compress
spacemap_histogram
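One way to create that file – a simple printf(1) sketch:

# printf '%s\n' async_destroy bookmarks embedded_data empty_bpobj \
                enabled_txg filesystem_limits hole_birth large_blocks \
                lz4_compress spacemap_histogram \
                > /usr/share/zfs/compatibility.d/COMPATIBLE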

Let's now try to make that best-effort most-compatible ZFS pool.

I will use one of my scripts – mdconfig.sh – to easily manipulate md(4) memory disks on FreeBSD.

# truncate -s 1g FILE
# mdconfig.sh -c FILE
IN: created vnode at /dev/md0
# zpool create -o compatibility=COMPATIBLE compatible /dev/md0
# zpool list
NAME         SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
compatible   960M   116K   960M        -         -     0%     0%  1.00x    ONLINE  -
zroot        118G  48.6G  69.4G        -         -    36%    41%  1.00x    ONLINE  -


Now let's see what the zpool upgrade command shows us.

# zpool upgrade
This system supports ZFS pool feature flags.

All pools are formatted using feature flags.


Some supported features are not enabled on the following pools. Once a
feature is enabled the pool may become incompatible with software
that does not support the feature. See zpool-features(7) for details.

Note that the pool 'compatibility' feature can be used to inhibit
feature upgrades.

POOL  FEATURE
---------------
compatible
      multi_vdev_crash_dump
      large_dnode
      sha512
      skein
      userobj_accounting
      encryption
      project_quota
      device_removal
      obsolete_counts
      zpool_checkpoint
      spacemap_v2
      allocation_classes
      resilver_defer
      bookmark_v2
      redaction_bookmarks
      redacted_datasets
      bookmark_written
      log_spacemap
      livelist
      device_rebuild
      zstd_compress
      draid
zroot
      userobj_accounting
      encryption
      project_quota
      allocation_classes
      resilver_defer
      bookmark_v2
      redaction_bookmarks
      redacted_datasets
      bookmark_written
      log_spacemap
      livelist
      device_rebuild
      zstd_compress
      draid

There are LOTS of ZFS Feature Flags that could be activated but if we want to keep our ZFS pool compatible – we will have to stay away from them πŸ™‚

[zfs-terminal screenshot]

You may get the impression that you are missing a lot … but you do not miss that much. Here are the ZFS Feature Flags you can fully utilize.

# zpool get all compatible | grep -v disabled
NAME        PROPERTY                       VALUE                          SOURCE
compatible  size                           960M                           -
compatible  capacity                       0%                             -
compatible  altroot                        -                              default
compatible  health                         ONLINE                         -
compatible  guid                           5395735446052695775            -
compatible  version                        -                              default
compatible  bootfs                         -                              default
compatible  delegation                     on                             default
compatible  autoreplace                    off                            default
compatible  cachefile                      -                              default
compatible  failmode                       wait                           default
compatible  listsnapshots                  off                            default
compatible  autoexpand                     off                            default
compatible  dedupratio                     1.00x                          -
compatible  free                           960M                           -
compatible  allocated                      116K                           -
compatible  readonly                       off                            -
compatible  ashift                         0                              default
compatible  comment                        -                              default
compatible  expandsize                     -                              -
compatible  freeing                        0                              -
compatible  fragmentation                  0%                             -
compatible  leaked                         0                              -
compatible  multihost                      off                            default
compatible  checkpoint                     -                              -
compatible  load_guid                      17463015630652190527           -
compatible  autotrim                       off                            default
compatible  compatibility                  COMPATIBLE                     local
compatible  feature@async_destroy          enabled                        local
compatible  feature@empty_bpobj            enabled                        local
compatible  feature@lz4_compress           active                         local
compatible  feature@spacemap_histogram     active                         local
compatible  feature@enabled_txg            active                         local
compatible  feature@hole_birth             active                         local
compatible  feature@extensible_dataset     enabled                        local
compatible  feature@embedded_data          active                         local
compatible  feature@bookmarks              enabled                        local
compatible  feature@filesystem_limits      enabled                        local
compatible  feature@large_blocks           enabled                        local

You get the very decent LZ4 compression and also the ZFS Bookmarks feature which is very useful for the zfs send | zfs recv mechanism.

Keep in mind that the -o compatibility= switch for zpool(8) is available in OpenZFS 2.1 or newer. The 2.1 version is already available in the FreeBSD 13.1-BETA* releases and will be part of the FreeBSD 13.1-RELEASE systems. To make use of it on older FreeBSD releases you will have to use the openzfs and openzfs-kmod packages and also change the following settings in the /boot/loader.conf file.

From that one:

zfs_load=YES

Into that one:

zfs_load=NO
openzfs_load=YES

With these openzfs and openzfs-kmod packages and the above settings in the /boot/loader.conf file you can use this OpenZFS 2.1 on FreeBSD 12.2 – on FreeBSD 12.3 – and on FreeBSD 13.0 … and of course on the upcoming FreeBSD 13.1 release.
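A minimal sketch of that whole switch – assuming the packages are available for your release – sysrc(8) just edits the name="value" pairs for us:

# pkg install -y openzfs openzfs-kmod
# sysrc -f /boot/loader.conf zfs_load=NO
# sysrc -f /boot/loader.conf openzfs_load=YES
# reboot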

Not sure if I should add anything more here. Feel free to remind me in the comments πŸ™‚

EOF

ZFS Boot Environments Revolutions

I do not have to remind you that I am a big fan of the ZFS Boot Environments feature. From the time when I first used it on the OpenSolaris and Solaris systems I was really fascinated by it. Bulletproof upgrades and changes to the entire system … and it was possible more than a decade ago. Like a dream. Today with the beadm(8) and bectl(8) tools and also the FreeBSD loader(8) the ZFS Boot Environments are first class citizens and one of the main features of the FreeBSD operating system.

Back in the more 'normal' times (before C19) I was able to talk twice about ZFS Boot Environments. I hope I explained them well.

  • 1st in Poland at PBUG meeting – with presentation available HERE.
  • 2nd in Holland at NLUUG conference – with presentation available HERE.

I do not know any downsides of ZFS Boot Environments but if you would stick a gun to my head and make me find one – I would say that you still have to reboot(8) to switch to another BE. This is about to change …

Reroot Instead of Reboot

What is reroot? It's the ability to switch to another root filesystem without the need for a full system reboot. The loaded and running kernel stays the same of course – but this is the only downside. This feature is implemented in the reboot(8) command with the -r argument.

As we can read in the FreeBSD 10.3-RELEASE Release Notes page:

The initial implementation of “reroot” support has been added to the reboot(8) utility, allowing the root filesystem to be mounted from a temporary source filesystem without requiring a full system reboot. (r293744) (Sponsored by The FreeBSD Foundation)
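Under the hood the reroot mechanism boils down to pointing the kernel at the new root filesystem and requesting the reroot – a rough sketch assuming a zroot pool and a BE named 13.1.safe (beadm(8) of course performs more checks than that):

# kenv vfs.root.mountfrom="zfs:zroot/ROOT/13.1.safe"
# reboot -r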

How can reroot be useful here? It will save you a lot of time when you did not update the kernel. There are two types of update strategies when using ZFS Boot Environments. You can create a new BE (as a backup world that you can get back to) and update the running system. Then you can use checkrestart(1) to verify which processes should be restarted because either their binaries or libraries have been updated.

[checkrestart.1 screenshot]

The other way is to create a new separate BE (while not touching the running one) – then mount it – update that new BE – and reboot into it later. This created a need to reboot(8) … but not anymore. Especially when you just update the packages with the pkg(8) command.

beadm(8)

With the new reroot option of beadm(8) you will tell FreeBSD to reroot your running kernel into the specified BE. It will definitely have less impact in virtual machines as they reboot quite fast but imagine the time saved on a server class physical machine with about 10 minutes lost for BIOS POST messages and initialization … or a personal desktop/laptop GELI encrypted system without the need to type in the GELI password again to decrypt it after reboot.

[beadm.reroot screenshot]

On the screenshot above I use the latest FreeBSD 13.1-BETA1 but it works the same on other production FreeBSD releases such as 12.3-RELEASE or 13.0-RELEASE. The new upgraded beadm(8) is available from its home GitHub page.

I will add that updated version to the FreeBSD Ports tree later along with an updated man page.

Usage

Usage of this new feature is quite simple. You type beadm reroot BENAME in the terminal and FreeBSD reroots into that BE without a reboot. It takes about 9-10 seconds on my 11-year-old ThinkPad W520 so it may be even faster on your more up to date system.

# beadm list
BE        Active Mountpoint  Space Created
12.3      -      -            9.5G 2021-10-18 13:14
13.0.p6   -      -           13.9G 2022-01-27 11:07
13.0      -      -           12.9G 2022-03-05 15:02
13.0.safe -      -            2.8M 2022-03-08 14:54
13.1      NR     /            9.5G 2022-03-12 00:18
13.1.safe -      -          544.0M 2022-03-13 23:18

# beadm activate 13.1.safe
Activated successfully

# beadm reroot 13.1.safe

… and you are going a route similar to typing shutdown now on a running system. All services are stopped. Then the root filesystem is changed to the new one. Then the system continues to boot along with starting all its services as usual. Just without the BIOS POST and the bootloader and kernel parts.

The reroot feature is especially useful when the kernel has not changed – for example after updating only the packages in a new BE with pkg(8).

From what I know the bectl(8) tool does not have that reroot feature but maybe it will be added sometime in the future.

Summary

Not sure that ZFS Boot Environments Revolutions is the best title for this blog post but as I used Reloaded for my 2nd ZFS Boot Environments presentation I thought that sticking to The Matrix (1999) naming scheme would fit. I could of course do a 3rd and updated presentation … but I am afraid that it will not happen … or at least not soon.

I did not think that the FreeBSD Enterprise Storage presentation that I gave at the 2020/02 PBUG would be my last – it was more than 2 years ago.

UPDATE 1 – Faster Upgrade with New beadm(8) Version

Today (2022/05/06) I introduced a new beadm(8) version 1.3.5 that comes with a new chroot(8) feature. It has already been committed to the FreeBSD Ports tree under PR 263805 so expect packages to be available soon.

You can also update beadm(8) directly like that:

# fetch -o /usr/local/sbin/beadm https://raw.githubusercontent.com/vermaden/beadm

Now for the faster update process – here are the instructions depending on the shell you use. The freebsd-update install command is executed three times because major upgrades install the kernel first – then the world – and then remove old libraries.

  • ZSH / CSH
# beadm create 13.1-RC6
# beadm chroot 13.1-RC6
BE # zsh || csh
BE # yes | freebsd-update upgrade -r 13.1-RC6
BE # repeat 3 freebsd-update install
BE # exit
# beadm activate 13.1-RC6
# reboot
  • SH / BASH / FISH / KSH
# beadm create 13.1-RC6
# beadm chroot 13.1-RC6
BE # sh || bash || fish || ksh
BE # yes | freebsd-update upgrade -r 13.1-RC6
BE # seq 3 | xargs -I- freebsd-update install
BE # exit
# beadm activate 13.1-RC6
# reboot

Happy upgrading πŸ™‚

EOF

Other FreeBSD Version in ZFS Boot Environment

The first FreeBSD 12.3-PRERELEASE snapshots are finally available. This means we can try them in a new ZFS Boot Environment without touching our currently running 13.0-RELEASE system. We cannot take the usual path of creating a new BE from our current one and upgrading it to a newer version because 12.3 has an older major version than the 13.0 one.

It is kind of a paradox in the FreeBSD release process that when released the 12.3-RELEASE will have some newer commits and features than the older 13.0-RELEASE which was released earlier this year. Of course not all things that have been committed to HEAD go into 12-STABLE or 13-STABLE automatically – but most of them do. Only the biggest changes will be limited to 14.0-RELEASE – which will probably go through its release process somewhere in the middle of 2022.

One note about the ZFS filesystem on FreeBSD. People often confuse 'real' ZFS Boot Environments with their trying-to-be substitutes like BTRFS snapshots or the snapshots used by Ubuntu with the zsysctl(8) command. Unfortunately they are only snapshots and not full writable clones (or entirely separate ZFS datasets). They can freeze your system in time so you will be able to get back to a working configuration after updating packages for example – but you will not be able to install another separate version of a system as another ZFS dataset making it an independent ZFS Boot Environment.
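The difference in ZFS terms – a snapshot is read-only and frozen in time while a clone is a full writable dataset based on such a snapshot – a sketch assuming a zroot pool:

# zfs snapshot zroot/ROOT/13.0@safe                    # read-only - frozen in time
# zfs clone zroot/ROOT/13.0@safe zroot/ROOT/13.0.safe  # writable independent dataset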

Create New ZFS Dataset

host # beadm list
BE             Active Mountpoint  Space Created
13.0.w520      NR     /           12.8G 2021-09-14 17:27
13.0.w520.safe -      -            1.2G 2021-10-18 10:01

host # zfs list -r zroot/ROOT
NAME                        USED  AVAIL     REFER  MOUNTPOINT
zroot/ROOT                 12.8G  96.8G       88K  none
zroot/ROOT/13.0.w520       12.8G  96.8G     11.6G  /
zroot/ROOT/13.0.w520.safe     8K  96.8G     11.1G  /

host # zfs create -o mountpoint=/ -o canmount=off zroot/ROOT/12.3

host # beadm list
BE             Active Mountpoint  Space Created
13.0.w520      NR     /           12.8G 2021-09-14 17:27
13.0.w520.safe -      -            1.2G 2021-10-18 10:01
12.3           -      -           96.0K 2021-10-18 13:14

Install FreeBSD 12.3-PRERELEASE

host # beadm mount 12.3 /var/tmp/12.3
Mounted successfully on '/var/tmp/12.3'

host # beadm list
BE             Active Mountpoint     Space Created
13.0.w520      NR     /              12.8G 2021-09-14 17:27
13.0.w520.safe -      -               1.2G 2021-10-18 10:01
12.3           -      /var/tmp/12.3  96.0K 2021-10-18 13:14

host # curl -o - https://download.freebsd.org/ftp/snapshots/amd64/12.3-PRERELEASE/base.txz \
         | tar --unlink -xpf - -C /var/tmp/12.3
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  173M  100  173M    0     0  1889k      0  0:01:33  0:01:33 --:--:-- 2228k

host # exa -1 /var/tmp/12.3
bin
boot
dev
etc
lib
libexec
media
mnt
net
proc
rescue
root
sbin
tmp
usr
var
COPYRIGHT
sys

host # curl -o - https://download.freebsd.org/ftp/snapshots/amd64/12.3-PRERELEASE/kernel.txz \
         | tar --unlink -xpf - -C /var/tmp/12.3
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 43.3M  100 43.3M    0     0  1733k      0  0:00:25  0:00:25 --:--:-- 1663k

host # exa -lh /var/tmp/12.3/boot/kernel/kernel
Permissions Size User Date Modified    Name
.r-xr-xr-x   37M root 2021-10-14 06:31 /var/tmp/12.3/boot/kernel/kernel

host # curl -o - https://download.freebsd.org/ftp/snapshots/amd64/12.3-PRERELEASE/lib32.txz \
         | tar --unlink -xpf - -C /var/tmp/12.3

host # exa -ld /var/tmp/12.3/usr/lib32
drwxr-xr-x - root 2021-10-18 13:45 /var/tmp/12.3/usr/lib32

Install Same Packages as on Host

With the pkg prime-list alias we will get all of the installed-by-hand pkg(8) packages from our currently running system. You may omit this section or just install the packages that you need instead of all of them.
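In case your pkg(8) configuration lacks the prime-list alias – it is just a predefined alias (see pkg.conf(5)) for a query that lists packages not installed as dependencies – something like this one:

% pkg query -e '%a = 0' '%n'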

host # pkg prime-list > /var/tmp/12.3/pkg.prime-list

host # chroot /var/tmp/12.3 /bin/sh

(BE) # export PS1="BE # "

BE # mount -t devfs devfs /dev

BE # sed -i '' s/quarterly/latest/g /etc/pkg/FreeBSD.conf

BE # pkg install -y $( cat pkg.prime-list )
Bootstrapping pkg from pkg+http://pkg.FreeBSD.org/FreeBSD:12:amd64/latest, please wait...
Verifying signature with trusted certificate pkg.freebsd.org.2013102301... done
Installing pkg-1.17.2...
Extracting pkg-1.17.2: 100%
Updating FreeBSD repository catalogue...
Fetching meta.conf: 100%    163 B   0.2kB/s    00:01
Fetching packagesite.pkg: 100%    6 MiB   1.3MB/s    00:05
Processing entries: 100%
FreeBSD repository update completed. 31294 packages processed.
All repositories are up to date.
Updating database digests format: 100%
pkg: No packages available to install matching 'chromium' have been found in the repositories
pkg: No packages available to install matching 'drm-fbsd13-kmod' have been found in the repositories
pkg: No packages available to install matching 'geany-gtk2' have been found in the repositories
pkg: No packages available to install matching 'ramspeed' have been found in the repositories
pkg: No packages available to install matching 'vim-console' have been found in the repositories

As we can see some of the packages that we have installed in the FreeBSD 13.0-RELEASE system are not currently available in the 'latest' pkg(8) branch for the FreeBSD 12.3-PRERELEASE system. This sometimes happens when the build of such a package fails – but you may assume that such a package will be available in a week or so as that is the period in which pkg(8) packages are (re)built in the 'latest' branch.

We will now remove the missing packages and also rename some packages that have different names for the 12.x version of FreeBSD.

BE # sed -i '' \
         -e s/drm-fbsd13-kmod/drm-kmod/g \
         -e s/geany-gtk2/geany/g \
         -e s/vim-console/vim-tiny/g \
         pkg.prime-list

BE # pkg install -y $( cat pkg.prime-list | grep -v -e chromium -e ramspeed )
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
The following 1072 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
        (...)

Number of packages to be installed: 1072

The process will require 11 GiB more space.
2 GiB to be downloaded.
(...)

BE # rm pkg.prime-list

An hour or so later our packages were installed.

BE # pkg stats
Local package database:
        Installed packages: 1073
        Disk space occupied: 11 GiB

Remote package database(s):
        Number of repositories: 1
        Packages available: 31294
        Unique packages: 31294
        Total size of packages: 96 GiB

Copy Configuration Files

You can now reboot into a plain and unconfigured FreeBSD system but you may as well copy your configuration files from your current working installation. These are the files I have copied.

First the files from the Base System /etc and /boot locations.

host # for I in /boot/loader.conf \
                /etc/hosts \
                /etc/fstab \
                /etc/rc.conf \
                /etc/sysctl.conf \
                /etc/wpa_supplicant.conf \
                /etc/jail.conf \
                /etc/devfs.rules \
                /etc/resolv.conf
       do
         cp "${I}" /var/tmp/12.3/"${I}"
         echo "${I}"
       done
/boot/loader.conf
/etc/hosts
/etc/fstab
/etc/rc.conf
/etc/sysctl.conf
/etc/wpa_supplicant.conf
/etc/jail.conf
/etc/devfs.rules
/etc/resolv.conf

Now the files for installed packages under the /usr/local/etc dir.

host # for I in /usr/local/etc/X11/xorg.conf.d/* \
                /usr/local/etc/X11/xdm/{Xresources,Xsetup_0} \
                /usr/local/etc/automount.conf \
                /usr/local/etc/sudoers \
                /usr/local/etc/doas.conf \
                /usr/local/etc/zshrc
       do
         cp "${I}" /var/tmp/12.3/"${I}"
         echo "${I}"
       done
/usr/local/etc/X11/xorg.conf.d/card.conf
/usr/local/etc/X11/xorg.conf.d/flags.conf
/usr/local/etc/X11/xorg.conf.d/keyboard.conf
/usr/local/etc/X11/xorg.conf.d/touchpad.conf
/usr/local/etc/X11/xdm/Xresources
/usr/local/etc/X11/xdm/Xsetup_0
/usr/local/etc/automount.conf
/usr/local/etc/sudoers
/usr/local/etc/doas.conf
/usr/local/etc/zshrc

Add Users and Set Passwords

You should now add your regular user and set passwords for both your user and root account.

BE # pw useradd vermaden -u 1000 -d /home/vermaden -G wheel,operator,video,network,webcamd,vboxusers

BE # passwd root

BE # passwd vermaden

Reboot Into New ZFS Boot Environment

You may now exit the chroot(8) of that ZFS Boot Environment and reboot. In the FreeBSD loader(8) menu select the 12.3 boot environment.

BE # exit

host # umount /var/tmp/12.3/dev

host # beadm unmount 12.3
Unmounted successfully

host # beadm list -D
BE             Active Mountpoint  Space Created
13.0.w520      NR     /           11.3G 2021-09-14 17:27
13.0.w520.safe -      -           11.1G 2021-10-18 10:01
12.3           -      -            9.5G 2021-10-18 13:14

host # shutdown -r now

Testing New System

The 12.3-PRERELEASE system started fine for me. I was able to login and use the system as usual. One important thing to note … the ZFS pools. I have another newer ZFS pool with zstd compression enabled … and I was not able to import that ZFS pool as FreeBSD 12.3-PRERELEASE does not use OpenZFS 2.0 but the older FreeBSD in-house ZFS version.

# zpool import data
This pool uses the following feature(s) not supported by this system:
        org.freebsd:zstd_compress (zstd compression algorithm support.)
        com.delphix:log_spacemap (Log metaslab changes on a single spacemap and flush them periodically.)
        org.zfsonlinux:project_quota (space/object accounting based on project ID.)
        org.zfsonlinux:userobj_accounting (User/Group object accounting.)
cannot import 'data': unsupported version or feature

Keep that in mind … but you can also install newer OpenZFS from the FreeBSD Ports – and this is what we will do now.

# pkg install -y openzfs openzfs-kmod
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
The following 2 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
        openzfs: 2021090800
        openzfs-kmod: 2021090800

Number of packages to be installed: 2

The process will require 22 MiB more space.
4 MiB to be downloaded.
[1/2] Fetching openzfs-2021090800.pkg: 100%    3 MiB 975.3kB/s    00:03
[2/2] Fetching openzfs-kmod-2021090800.pkg: 100%    1 MiB 591.2kB/s    00:02
Checking integrity... done (0 conflicting)
[1/2] Installing openzfs-kmod-2021090800...
[1/2] Extracting openzfs-kmod-2021090800: 100%
pkg: Cannot open /dev/null:No such file or directory
[2/2] Installing openzfs-2021090800...
[2/2] Extracting openzfs-2021090800: 100%
=====
Message from openzfs-kmod-2021090800:

--
Amend /boot/loader.conf as follows to use this module:

- change zfs_load="YES" to NO
- change opensolaris_load="YES" to NO
- add openzfs_load="YES"
- (for ARM64) add cryptodev_load="YES"
=====
Message from openzfs-2021090800:

--
Ensure that any zfs-related commands, such as zpool, zfs, as used in scripts
and in your terminal sessions, use the correct path of /usr/local/sbin/ and
not the /sbin/ commands provided by the FreeBSD base system.

Consider setting this in your shell profile defaults!
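For sh(1) compatible shells that could be – for example – this line in the shell profile (adjust for your shell of choice):

export PATH="/usr/local/sbin:${PATH}"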

We will now have to modify our /boot/loader.conf file.

host # beadm mount 12.3 /var/tmp/12.3
Mounted successfully on '/var/tmp/12.3'

host # chroot /var/tmp/12.3

BE # cp /boot/loader.conf /boot/loader.conf.ZFS

BE # vi /boot/loader.conf

BE # diff -u /boot/loader.conf.ZFS /boot/loader.conf
--- /boot/loader.conf.ZFS       2021-10-19 10:57:04.180732000 +0000
+++ /boot/loader.conf   2021-10-19 10:57:23.992145000 +0000
@@ -12,7 +12,8 @@

 # MODULES - BOOT
   geom_eli_load=YES
-  zfs_load=YES
+  zfs_load=NO
+  openzfs_load=YES

 # DISABLE /dev/diskid/* ENTRIES FOR DISKS
   kern.geom.label.disk_ident.enable=0

BE # shutdown -r now

After reboot and trying again I was able to import that newer ZFS pool.

Hope that you will find this guide useful.

Feel free to add your suggestions.

UPDATE 1 – Notes When Installing Newer Version

This guide was written when I tried FreeBSD 12.3 on a system previously used by FreeBSD 13.0 so the bootcode did not need to be updated. I have just tried 13.1 on the same 13.0 system and these two steps are needed to update the bootcode.

UEFI

For the UEFI partition you will need to copy the /boot/loader.efi file from the 13.1 installation which means the /var/tmp/13.1 dir. Here is the command to be used.

host # gpart show -p ada1
=>       40  250069600    ada1  GPT  (119G)
         40     409600  ada1p1  efi  (200M)          <== UEFI BOOT PARTITION
     409640       1024  ada1p2  freebsd-boot  (512K) <== BIOS BOOT PARTITION
     410664        984          - free -  (492K)
     411648    2097152  ada1p3  freebsd-swap  (1.0G)
    2508800  247560192  ada1p4  freebsd-zfs  (118G)
  250068992        648          - free -  (324K)

host # mount_msdosfs /dev/ada1p1 /mnt

host # cp /var/tmp/13.1/boot/loader.efi /mnt/efi/boot/bootx64.efi

BIOS

For the systems that boot in legacy/BIOS mode you will use this gpart(8) command instead.

host # cd /var/tmp/13.1/boot
host # pwd
/var/tmp/13.1/boot
host # gpart bootcode -b ./pmbr -p ./gptzfsboot -i 2 ada1
partcode written to ada1p2
bootcode written to ada1

As FreeBSD is often installed as BIOS+UEFI boot capable – both of these steps may be needed.

EOF