Saturday, November 29, 2025

MinIO on FreeBSD

MinIO is a high-performance, S3-compatible object storage platform designed for scalability, resilience, and simplicity in modern cloud-native environments. Its lightweight architecture and impressive throughput make it a popular choice for both on-premises and hybrid deployments, especially when building distributed storage clusters. In this post, I’ll briefly introduce MinIO and then walk through the test environment of a six-node MinIO cluster, exploring how it behaves, performs, and scales in a real-world lab setup.

MinIO Lab Environment

Let's describe our lab environment. 

MinIO Node HW Specification

  • 2x vCPU
  • 4 GB RAM
  • 5x Disk
    • 8 GB for OS - /dev/da0
    • 1 GB for SLOG - /dev/da1
    • 1 GB for SLOG - /dev/da2
    • 1 GB for L2ARC - /dev/da3
    • 10 GB for Data - /dev/da4
  • 1x NIC
    • vmx0 

MinIO Node SW Specification

  • FreeBSD 14.3
  • MinIO
    • Version: (minio --version)
      • RELEASE.2025-10-15T17-29-55Z
      • Runtime: go1.24.9 freebsd/amd64

IP Plan

  • MinIO IP Addresses
    • minio-01: 192.168.8.51
    • minio-02: 192.168.8.52
    • minio-03: 192.168.8.53 
    • minio-04: 192.168.8.54 
    • minio-05: 192.168.8.55 
    • minio-06: 192.168.8.56  

Single Node Installation Procedure

In this section, we will install a single-node MinIO. 

FreeBSD Installation

FreeBSD installation is out of scope.

FreeBSD update procedure ...

freebsd-update fetch
freebsd-update install
freebsd-version -kru
reboot
 

Preparation of ZFS 

# Create ZFS pool on drive /dev/da4 for MinIO data
zpool create MINIO-DATA da4

# Add mirrored SLOG (separate ZFS intent log) on /dev/da1 and /dev/da2
zpool add MINIO-DATA log mirror /dev/da1 /dev/da2

# Add L2ARC (read cache) on /dev/da3
zpool add MINIO-DATA cache /dev/da3

Here is the zpool status ...

 root@minio-01:~ # zpool status  
   pool: MINIO-DATA  
  state: ONLINE  
 config:  
       NAME         STATE   READ WRITE CKSUM  
       MINIO-DATA   ONLINE     0     0     0  
         da4        ONLINE     0     0     0  
       logs  
         mirror-1   ONLINE     0     0     0  
           da1      ONLINE     0     0     0  
           da2      ONLINE     0     0     0  
       cache  
         da3        ONLINE     0     0     0  
 errors: No known data errors  

   pool: zroot  
  state: ONLINE  
 config:  
       NAME         STATE   READ WRITE CKSUM  
       zroot        ONLINE     0     0     0  
         da0p3      ONLINE     0     0     0  
 errors: No known data errors  

# Create ZFS dataset for the MinIO datastore
zfs create MINIO-DATA/minio-datastore

# Recommended ZFS settings for MinIO
zfs set compression=lz4 MINIO-DATA/minio-datastore
zfs set atime=off MINIO-DATA/minio-datastore
zfs set recordsize=1M MINIO-DATA/minio-datastore
zfs set redundant_metadata=most MINIO-DATA/minio-datastore
zfs set logbias=latency MINIO-DATA/minio-datastore

Explanation:

  • recordsize=1M - matches MinIO's large-object I/O
  • atime=off - fewer metadata writes
  • redundant_metadata=most - better performance
  • logbias=latency - maximizes sync-write performance because we use a dedicated SLOG device

# Set permissions (run this after installing the minio package below, which creates the minio user and group)
chown -R minio:minio /MINIO-DATA/minio-datastore
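
To double-check the settings, you can query them back (a quick sanity check, not strictly required):

zfs get compression,atime,recordsize,redundant_metadata,logbias MINIO-DATA/minio-datastore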

Here is the list of ZFS datasets ...

 root@minio-01:~ # zfs list   
 NAME                                         USED AVAIL REFER MOUNTPOINT  
 MINIO-DATA                                   672K 9.20G  96K /MINIO-DATA  
 MINIO-DATA/minio-datastore                    96K 9.20G  96K /MINIO-DATA/minio-datastore  
 zroot                                       1.92G 3.41G  96K /zroot  
 zroot/ROOT                                  1.92G 3.41G  96K none  
 zroot/ROOT/14.3-RELEASE_2025-11-15_130359      8K 3.41G 1.67G /  
 zroot/ROOT/default                          1.92G 3.41G 1.77G /  
 zroot/home                                   224K 3.41G  96K /home  
 zroot/home/dpasek                            128K 3.41G  128K /home/dpasek  
 zroot/tmp                                     96K 3.41G  96K /tmp  
 zroot/usr                                    288K 3.41G  96K /usr  
 zroot/usr/ports                               96K 3.41G  96K /usr/ports  
 zroot/usr/src                                 96K 3.41G  96K /usr/src  
 zroot/var                                    640K 3.41G  96K /var  
 zroot/var/audit                               96K 3.41G  96K /var/audit  
 zroot/var/crash                               96K 3.41G  96K /var/crash  
 zroot/var/log                                152K 3.41G  152K /var/log  
 zroot/var/mail                               104K 3.41G  104K /var/mail  
 zroot/var/tmp                                 96K 3.41G  96K /var/tmp  
 root@minio-01:~ #  

Install and Enable MinIO

There is a minio package in the FreeBSD package repository:

 root@minio-01:~ # pkg search minio  
 minio-2025.10.15.17.29.55_1  Amazon S3 compatible object storage server  

pkg install -y minio
pkg install -y minio-client
sysrc minio_enable="YES"
sysrc minio_disks="MINIO-DATA/minio-datastore"
sysrc minio_address=":9000"
sysrc minio_console_address=":9001"
sysrc minio_user="minio"
sysrc minio_root_user="admin"
sysrc minio_root_password="password"
sysrc minio_logfile="/var/log/minio.log"
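
An optional check that all the knobs really ended up in /etc/rc.conf:

sysrc -a | grep '^minio'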
 

Start MinIO 

service minio start
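
A quick sanity check that the service is running and listening (the log path matches the minio_logfile set above):

service minio status
sockstat -4l | grep 9000
tail /var/log/minio.log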

Configure and Validate MinIO command line client

Let's configure the minio-client to validate the status of our MinIO object storage.

minio-client alias rm local
minio-client alias set local http://localhost:9000 admin password
 
# Make Bucket (mb) 
minio-client mb local/testbucket
 
# Copy local file into the bucket
minio-client cp /etc/rc.conf local/testbucket/
 
# Remove Bucket (rb) 
minio-client rb local/testbucket --force
 
# Get info about local MinIO
minio-client admin info local
 root@minio-01:~ # minio-client admin info local  
 ● localhost:9000  
   Uptime: 10 hours   
   Version: 2025-10-15T17:29:55Z  
   Network: 1/1 OK   
   Drives: 1/1 OK   
   Pool: 1  
 ┌──────┬───────────────────────┬─────────────────────┬──────────────┐  
 │ Pool │ Drives Usage          │ Erasure stripe size │ Erasure sets │  
 │ 1st  │ 0.0% (total: 9.2 GiB) │ 1                   │ 1            │  
 └──────┴───────────────────────┴─────────────────────┴──────────────┘  
 1 drive online, 0 drives offline, EC:0  
 root@minio-01:~ # 

MinIO Web Console

You can use the MinIO Web Console at http://192.168.8.51:9001 (the console port configured via minio_console_address).
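
You can also probe the S3 API endpoint itself with MinIO's health-check URL (curl is assumed to be installed, e.g. via pkg install curl):

curl -I http://192.168.8.51:9000/minio/health/live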

Cluster (6-node) Installation Procedure 

We have a working single-node MinIO, so let's document the procedure for moving to a 6-node distributed MinIO cluster with 4+2 erasure coding.

Standalone vs Cluster Configuration

A standalone configuration runs on a single server. We are targeting a 6-node MinIO cluster architecture.  

A 4+2 setup means:

  • 4 data blocks
  • 2 parity blocks
  • total 6 drives per erasure set
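
As an illustration, with the 10 GB data disk on each of the 6 nodes, a 4+2 stripe gives roughly 60 GB × 4/6 ≈ 40 GB of usable capacity and tolerates the loss of any 2 drives (or nodes) in the set.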

To achieve a 6-node MinIO cluster with 4+2 EC, each node must contribute at least one disk/path to the erasure set, with each endpoint located on a different server. 

A 6-node MinIO cluster requires:

  • At least N = 6 storage endpoints
  • All endpoints must be listed on every node

Each node provides one volume (for simple design):

Node      Volume Path
minio-01  http://192.168.8.51:9000/MINIO-DATA/minio-datastore
minio-02  http://192.168.8.52:9000/MINIO-DATA/minio-datastore
minio-03  http://192.168.8.53:9000/MINIO-DATA/minio-datastore
minio-04  http://192.168.8.54:9000/MINIO-DATA/minio-datastore
minio-05  http://192.168.8.55:9000/MINIO-DATA/minio-datastore
minio-06  http://192.168.8.56:9000/MINIO-DATA/minio-datastore 

On each MinIO FreeBSD server, we must change local minio_disks to network minio_disks.

Old (single node) config:

sysrc minio_disks="MINIO-DATA/minio-datastore" 

New (6-node cluster) config:

sysrc minio_disks=" \
http://192.168.8.51:9000/MINIO-DATA/minio-datastore \
http://
192.168.8.52:9000/MINIO-DATA/minio-datastore \
http://
192.168.8.53:9000/MINIO-DATA/minio-datastore \
http://
192.168.8.54:9000/MINIO-DATA/minio-datastore \
http://
192.168.8.55:9000/MINIO-DATA/minio-datastore \
http://
192.168.8.56:9000/MINIO-DATA/minio-datastore \
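
For reference, MinIO also understands ellipsis notation for consecutive endpoints, so the same six endpoints can be written more compactly (a sketch; the {1...6} ellipsis is expanded by MinIO itself, not by the shell):

sysrc minio_disks="http://192.168.8.5{1...6}:9000/MINIO-DATA/minio-datastore"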

Cleaning old MinIO Standalone config and data

If you tested MinIO in standalone mode, you have to remove the MinIO system configuration and data on each server.

rm -rf /MINIO-DATA/minio-datastore/.minio.sys
rm -rf /MINIO-DATA/minio-datastore/*
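
If you have root SSH access to all nodes, the cleanup can be scripted from a single machine (a sketch; it assumes key-based root SSH, which is not enabled by default on FreeBSD):

for ip in 192.168.8.51 192.168.8.52 192.168.8.53 192.168.8.54 192.168.8.55 192.168.8.56; do
  ssh root@${ip} 'service minio stop; rm -rf /MINIO-DATA/minio-datastore/.minio.sys /MINIO-DATA/minio-datastore/*'
done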

Reapplying Configuration 

Reapplying the cluster configuration is easy: just restart the minio service. I actually stopped the minio service on all 6 servers and, after the configuration change, started the service again.

service minio stop
# change the configuration on each cluster node
service minio start
 

Cluster Status Verification

If everything went correctly, we can check MinIO cluster status ...

 root@minio-01:~ # minio-client admin info local  
 ● 192.168.8.51:9000  
   Uptime: 8 minutes   
   Version: 2025-10-15T17:29:55Z  
   Network: 6/6 OK   
   Drives: 1/1 OK   
   Pool: 1  
 ● 192.168.8.52:9000  
   Uptime: 7 minutes   
   Version: 2025-10-15T17:29:55Z  
   Network: 6/6 OK   
   Drives: 1/1 OK   
   Pool: 1  
 ● 192.168.8.53:9000  
   Uptime: 15 minutes   
   Version: 2025-10-15T17:29:55Z  
   Network: 6/6 OK   
   Drives: 1/1 OK   
   Pool: 1  
 ● 192.168.8.54:9000  
   Uptime: 15 minutes   
   Version: 2025-10-15T17:29:55Z  
   Network: 6/6 OK   
   Drives: 1/1 OK   
   Pool: 1  
 ● 192.168.8.55:9000  
   Uptime: 7 minutes   
   Version: 2025-10-15T17:29:55Z  
   Network: 6/6 OK   
   Drives: 1/1 OK   
   Pool: 1  
 ● 192.168.8.56:9000  
   Uptime: 7 minutes   
   Version: 2025-10-15T17:29:55Z  
   Network: 6/6 OK   
   Drives: 1/1 OK   
   Pool: 1  
 ┌──────┬──────────────────────┬─────────────────────┬──────────────┐  
 │ Pool │ Drives Usage         │ Erasure stripe size │ Erasure sets │  
 │ 1st  │ 0.0% (total: 28 GiB) │ 6                   │ 1            │  
 └──────┴──────────────────────┴─────────────────────┴──────────────┘  
 6 drives online, 0 drives offline, EC:3  
 root@minio-01:~ #   
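
Note the EC:3 in the output above: with 6 drives per erasure set, MinIO defaults to 3 parity blocks per stripe (effectively 3+3) rather than the 4+2 we targeted. If you specifically want 4+2, the standard storage class parity can be lowered to EC:2, for example with the MinIO client (a sketch; it only affects newly written objects):

minio-client admin config set local storage_class standard=EC:2
minio-client admin service restart local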

VIP with CARP

MinIO can be accessed via each cluster node; however, a single Virtual IP (aka VIP) address should be used. This is where FreeBSD CARP comes into play. We will use a 6-node CARP cluster, leveraging all 6 servers of the MinIO cluster.

Node     IP Address    advskew    State (normal conditions)
node1    192.168.8.51  0          MASTER
node2    192.168.8.52  10         BACKUP
node3    192.168.8.53  20         BACKUP
node4    192.168.8.54  30         BACKUP
node5    192.168.8.55  40         BACKUP
node6    192.168.8.56  50         BACKUP 

Virtual IP Address (VIP): 192.168.8.50

Rules:

  • Lowest advskew wins and becomes MASTER.
  • All others watch and stay BACKUP.
  • If node1 dies, node2 becomes MASTER.
  • If node2 dies, node3 becomes MASTER.
  • And so on.

Below is VIP Configuration on all 6 nodes of our cluster ...

Node1 - 192.168.8.51 
# Add carp_load="YES" to /boot/loader.conf ... 
echo carp_load=\"YES\" >> /boot/loader.conf
 
# Add this alias to /etc/rc.conf ... 
ifconfig_vmx0_alias0="inet 192.168.8.50/24 vhid 1 pass mycarppass advskew 0"

# Add this to /etc/sysctl.conf
echo net.inet.carp.allow=1 >> /etc/sysctl.conf
echo net.inet.carp.preempt=1 >> /etc/sysctl.conf
echo net.inet.carp.log=1 >> /etc/sysctl.conf  
 
Node2 - 192.168.8.52 
# Add carp_load="YES" to /boot/loader.conf ... 
echo carp_load=\"YES\" >> /boot/loader.conf
 
# Add this alias to /etc/rc.conf ... 
ifconfig_vmx0_alias0="inet 192.168.8.50/24 vhid 1 pass mycarppass advskew 10"
 
# Add this to /etc/sysctl.conf
echo net.inet.carp.allow=1 >> /etc/sysctl.conf
echo net.inet.carp.preempt=1 >> /etc/sysctl.conf
echo net.inet.carp.log=1 >> /etc/sysctl.conf  
 
Node3 - 192.168.8.53 
# Add carp_load="YES" to /boot/loader.conf ... 
echo carp_load=\"YES\" >> /boot/loader.conf
 
# Add this alias to /etc/rc.conf ... 
ifconfig_vmx0_alias0="inet 192.168.8.50/24 vhid 1 pass mycarppass advskew 20"
 
# Add this to /etc/sysctl.conf
echo net.inet.carp.allow=1 >> /etc/sysctl.conf
echo net.inet.carp.preempt=1 >> /etc/sysctl.conf
echo net.inet.carp.log=1 >> /etc/sysctl.conf  
 
Node4 - 192.168.8.54 
# Add carp_load="YES" to /boot/loader.conf ... 
echo carp_load=\"YES\" >> /boot/loader.conf
 
# Add this alias to /etc/rc.conf ... 
ifconfig_vmx0_alias0="inet 192.168.8.50/24 vhid 1 pass mycarppass advskew 30"
 
# Add this to /etc/sysctl.conf
echo net.inet.carp.allow=1 >> /etc/sysctl.conf
echo net.inet.carp.preempt=1 >> /etc/sysctl.conf
echo net.inet.carp.log=1 >> /etc/sysctl.conf  
 
Node5 - 192.168.8.55 
# Add carp_load="YES" to /boot/loader.conf ... 
echo carp_load=\"YES\" >> /boot/loader.conf
 
# Add this alias to /etc/rc.conf ... 
ifconfig_vmx0_alias0="inet 192.168.8.50/24 vhid 1 pass mycarppass advskew 40"
 
# Add this to /etc/sysctl.conf
echo net.inet.carp.allow=1 >> /etc/sysctl.conf
echo net.inet.carp.preempt=1 >> /etc/sysctl.conf
echo net.inet.carp.log=1 >> /etc/sysctl.conf  
 
Node6 - 192.168.8.56
# Add carp_load="YES" to /boot/loader.conf ... 
echo carp_load=\"YES\" >> /boot/loader.conf
 
# Add this alias to /etc/rc.conf ... 
ifconfig_vmx0_alias0="inet 192.168.8.50/24 vhid 1 pass mycarppass advskew 50"
 
# Add this to /etc/sysctl.conf
echo net.inet.carp.allow=1 >> /etc/sysctl.conf
echo net.inet.carp.preempt=1 >> /etc/sysctl.conf
echo net.inet.carp.log=1 >> /etc/sysctl.conf 
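
After a reboot (or after loading the carp module and re-applying the sysctl settings), you can verify the CARP state on each node; exactly one node should report MASTER for vhid 1:

ifconfig vmx0 | grep carp
# expected on node1 under normal conditions: carp: MASTER vhid 1 advbase 1 advskew 0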

Note 1: When using CARP on VMware virtual networking, you have to allow Promiscuous mode, MAC address changes, and Forged transmits in the virtual switch / port group security settings.

Note 2: CARP is a simple way to get a VIP, but it is not a load-balancing solution. If we use the VIP address 192.168.8.50 for S3 clients, only a single node (the current CARP MASTER) will serve S3 front-end operations. If you want to load balance S3 front-end operations across all 6 nodes, you must use a load balancer such as NGINX or HAProxy.
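
For completeness, here is a minimal HAProxy sketch of such a load balancer (assumptions: HAProxy is installed from packages and runs on a separate host, or at least binds a port other than 9000; the port 9080 and backend names are illustrative):

pkg install -y haproxy
cat > /usr/local/etc/haproxy.conf <<'EOF'
global
    daemon

defaults
    mode http
    timeout connect 5s
    timeout client  60s
    timeout server  60s

frontend minio_s3
    bind *:9080
    default_backend minio_nodes

backend minio_nodes
    balance roundrobin
    server minio-01 192.168.8.51:9000 check
    server minio-02 192.168.8.52:9000 check
    server minio-03 192.168.8.53:9000 check
    server minio-04 192.168.8.54:9000 check
    server minio-05 192.168.8.55:9000 check
    server minio-06 192.168.8.56:9000 check
EOF
sysrc haproxy_enable="YES"
service haproxy start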

Conclusion

This is just a simple implementation of a 6-node MinIO object storage cluster. More advanced architectures can be planned, designed, and implemented; however, this is a good foundation for anything more advanced. 

It is worth mentioning that MinIO is not the only S3-compatible open-source storage solution. There are other very interesting projects. 

Let's mention two MinIO alternatives:

I will definitely test the above two object storage systems in the future. 

I hope this is beneficial for other hackers in the FreeBSD community.
