FreeBSD Enterprise Storage at PBUG

Yesterday I was honored to give a talk about FreeBSD Enterprise Storage at the Polish BSD User Group meeting.

You are invited to download the PDF slides, available here – https://is.gd/bsdstg.

The PBUG (Polish BSD User Group) meetings are very special. In “The Matrix” movie (which, by the way, was rendered on FreeBSD – see FreeBSD Used to Generate Spectacular Special Effects for the details) it is not possible to describe what the Matrix really is – one has to feel it, to enter it. I can tell you the same about the PBUG meetings. It is a bit like with the “Hangover” movie: what happens at a PBUG meeting stays at the PBUG meeting 🙂

If you have the time and the opportunity, join the next Polish BSD User Group meeting. You will not regret it :>

UPDATE 1 – Shorter Unified Version

The original presentation – https://is.gd/bsdstg – is 187 pages long; it is suited for a live talk but not the best for later ‘offline’ viewing.

I have created a unified version – https://is.gd/bsdstguni – with only 42 pages.

EOF

8 thoughts on “FreeBSD Enterprise Storage at PBUG”

  1. Alex

    Thank you for this, vermaden.
    At work we use NetApp FAS/AFF appliances, among other things. As a hobby I’d like to replicate some of their features with FreeBSD. I read through your corosync/pacemaker article and it’s very inspiring.
    I’m taking my first baby steps with ZFS on FreeBSD (even though I’ve been a user since 1999!) and storage.
    If I go about building a ZFS corosync/pacemaker cluster, is there any type of automated tiering possible? Would I need to build SSD pools and SAS pools and separate data on the application layer? How would I go about dealing with hot/warm/cold data? I could imagine moving my cold data onto an off-site DC and leaving it on a MinIO system.
    Is there a way of clustering groups of corosync/pacemaker server clusters? ONTAP can do that, but is there a way of doing that with FreeBSD? How about SnapMirror/SnapBackup features, is there something similar?
    I’m going to try and set some things up as VMs but maybe you’ve done some POCs in the past that may help me.
    Cheers!

    1. vermaden Post author

      Hi,

      that’s a lot of questions πŸ™‚

      If I go about building a ZFS corosync/pacemaker cluster, is there any type of automated tiering possible? (…) How would I go about dealing with hot/warm/cold data? I could imagine moving my cold data onto an off-site DC and leaving it on a MinIO system.

      ZFS does not have tiering. You will have to write your own set of scripts that check which data (or datasets) is hot and which is cold, and move it between the pools yourself.
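
      As a rough illustration – not a tested implementation – such a ‘demotion’ script could look like the sketch below; the pool names ‘ssd’ and ‘sas’ and the dataset argument are only assumptions for the example:

        #!/bin/sh
        # sketch: demote a dataset from the fast 'ssd' pool to the slower 'sas' pool
        # pool names and the dataset argument are examples only
        # usage: demote.sh projects/old-data
        DATASET="${1}"
        STAMP="$( date +%Y%m%d%H%M%S )"
        # freeze the current state of the dataset
        zfs snapshot "ssd/${DATASET}@demote-${STAMP}"
        # copy it (with its properties) to the slower pool
        # (the parent dataset must already exist on the 'sas' pool)
        zfs send -p "ssd/${DATASET}@demote-${STAMP}" | zfs receive "sas/${DATASET}"
        # after verifying the copy the original can be destroyed
        zfs destroy -r "ssd/${DATASET}"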

      Would I need to build SSD pools and SAS pools and separate data on the application layer?

      You would just create two separate ZFS pools – for example one named ‘ssd’ on the SSD disks and another named ‘sas’ on the SAS disks.
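
      For example – assuming da0 and da1 are the SSD disks and da2-da5 the SAS disks (the device names here are made up) – the pools could be created like this:

        # mirrored pool on the SSD disks
        zpool create ssd mirror da0 da1
        # RAIDZ2 pool on the SAS disks
        zpool create sas raidz2 da2 da3 da4 da5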

      Is there a way of clustering groups of corosync/pacemaker server clusters?

      You mean a cluster of clusters? I am not sure I understand this question 🙂

      How about SnapMirror/SnapBackup features, is there something similar?

      I do not know the SnapMirror and SnapBackup features, but maybe ggated(8)/ggatec(8)/hastd(8) will help here – or mirroring/replication with ZFS.
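
      As a hint only – a SnapMirror-like one-way replication can be approximated with periodic zfs send/receive over SSH. A minimal sketch, where the ‘backup’ host and the ‘sas/data’ dataset are assumptions for the example:

        #!/bin/sh
        # sketch: periodic one-way replication of sas/data to a remote 'backup' host
        # the host name and the dataset name are examples only
        NOW="$( date +%Y%m%d%H%M%S )"
        zfs snapshot "sas/data@repl-${NOW}"
        # find the snapshot created by the previous run (if any)
        PREV="$( zfs list -H -t snapshot -o name -s creation sas/data | tail -2 | head -1 )"
        if [ "${PREV}" != "sas/data@repl-${NOW}" ]
        then
          # incremental send based on the previous snapshot
          zfs send -i "${PREV}" "sas/data@repl-${NOW}" | ssh backup zfs receive -F sas/data
        else
          # first run - full send
          zfs send "sas/data@repl-${NOW}" | ssh backup zfs receive -F sas/data
        fi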

      I’m going to try and set some things up as VMs but maybe you’ve done some POCs in the past that may help me.

      Good luck and please share your results. I have not done anything more than what I described in the article so far.

      Regards.

      1. Alex

        Hey,

        Thank you for replying to my long list of questions. I’m sorry I couldn’t reply sooner. I really appreciate you taking the time to answer!

        “You will have to write your own set of scripts that check which data (or datasets) is hot and which is cold, and move it between the pools yourself.”

        I think that could work: separating the pools into SSD and SAS and scripting things. I’m not a huge fan of extra work like that, because it adds complexity and failure modes, but maybe I could bundle that into VMs or templates for jails, ZFS snapshots, etc. I’ll have to think about the amount of man-hours that would need to go into developing it.

        “You mean a cluster of clusters? I am not sure I understand this question”

        Sorry, my question was unclear – but yes, exactly: I would put groups of two into larger clusters, to scale out horizontally. On earlier NetApps you could only cluster two controllers together; later you could cluster groups of two into larger clusters, even at different DCs (within reason). I’d love to be able to do something similar with FreeBSD. I’ll PoC this and report back.

      2. vermaden Post author

        I think that could work: separating the pools into SSD and SAS and scripting things. I’m not a huge fan of extra work like that, because it adds complexity and failure modes, but maybe I could bundle that into VMs or templates for jails, ZFS snapshots, etc. I’ll have to think about the amount of man-hours that would need to go into developing it.

        You can always make this static – for example, you will know which data is ‘hot’ and which is not – but it will take time 🙂

        You can also start by putting everything into the SAS pool and then moving only the heaviest-IOPS data to the SSD pool by hand.
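
        To find out which data generates the heaviest I/O, the standard FreeBSD tools are enough – for example (the ‘sas’ pool name is the same assumption as above):

          # pool-level activity, refreshed every 5 seconds
          zpool iostat -v sas 5
          # per-process I/O view - shows which services generate the load
          top -m io -o total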

        Please share whatever you come up with. I am curious what you will get out of this 🙂

        Regards.
