MinIO is a high-performance, S3-compatible object storage platform designed for scalability, resilience, and simplicity in modern cloud-native environments. Its lightweight architecture and impressive throughput make it a popular choice for both on-premises and hybrid deployments, especially when building distributed storage clusters. In this post, I’ll briefly introduce MinIO and then walk through the test environment of a six-node MinIO cluster, exploring how it behaves, performs, and scales in a real-world lab setup.
This test environment is part of a potential 1 PB+ S3 storage system. The conceptual design of the system is depicted below.
Figure: Conceptual Design - S3 Object Storage
A Proof of Concept is always a good idea before the final system is professionally designed. In this blog post, I describe the first, ultra-small virtualized environment used to test the MinIO concept.
MinIO Lab Environment
Let's describe our lab environment.
MinIO Node HW Specification
- 2x vCPU
- 4 GB RAM
- 5x Disk
- 8 GB for OS - /dev/da0
- 1 GB for SLOG - /dev/da1
- 1 GB for SLOG - /dev/da2
- 1 GB for L2ARC - /dev/da3
- 10 GB for Data - /dev/da4
- 1x NIC
- vmx0
MinIO Node SW Specification
- FreeBSD 14.3
- MinIO
- Version: (minio --version)
- RELEASE.2025-10-15T17-29-55Z
- Runtime: go1.24.9 freebsd/amd64
IP Plan
- MinIO IP Addresses
- minio-01: 192.168.8.51
- minio-02: 192.168.8.52
- minio-03: 192.168.8.53
- minio-04: 192.168.8.54
- minio-05: 192.168.8.55
- minio-06: 192.168.8.56
Single Node Installation Procedure
In this section, we will install a single-node MinIO instance.
FreeBSD Installation
FreeBSD installation is out of scope of this post.
The FreeBSD update procedure is sketched briefly below.
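A minimal sketch of a typical update, assuming the standard freebsd-update and pkg tools:
# Update the FreeBSD base system and the installed packages
freebsd-update fetch install
pkg update && pkg upgrade -y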
Preparation of ZFS
# Create ZFS pool MINIO-DATA on drive /dev/da4 for MinIO data
zpool create MINIO-DATA da4
# Create mirrored SLOG (write cache) on /dev/da1 and /dev/da2
zpool add MINIO-DATA log mirror /dev/da1 /dev/da2
# Add L2ARC (read cache) on /dev/da3
zpool add MINIO-DATA cache /dev/da3
Here is the zpool status ...
root@minio-01:~ # zpool status
  pool: MINIO-DATA
 state: ONLINE
config:

        NAME          STATE     READ WRITE CKSUM
        MINIO-DATA    ONLINE       0     0     0
          da4         ONLINE       0     0     0
        logs
          mirror-1    ONLINE       0     0     0
            da1       ONLINE       0     0     0
            da2       ONLINE       0     0     0
        cache
          da3         ONLINE       0     0     0

errors: No known data errors

  pool: zroot
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          da0p3     ONLINE       0     0     0

errors: No known data errors
# Create the ZFS dataset for the MinIO datastore
zfs create MINIO-DATA/minio-datastore
# Recommended ZFS properties for MinIO
zfs set compression=lz4 MINIO-DATA/minio-datastore
zfs set atime=off MINIO-DATA/minio-datastore
zfs set recordsize=1M MINIO-DATA/minio-datastore
zfs set redundant_metadata=most MINIO-DATA/minio-datastore
zfs set logbias=latency MINIO-DATA/minio-datastore
Explanation:
- recordsize=1M - matches MinIO large object I/O
- atime=off - fewer metadata writes
- redundant_metadata=most - fewer redundant metadata writes, better write performance
- logbias=latency - maximize sync-write performance because we use a SLOG (write cache)
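To double-check that the properties landed on the dataset, they can be read back with zfs get:
zfs get compression,atime,recordsize,redundant_metadata,logbias MINIO-DATA/minio-datastore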
Here is the list of ZFS datasets ...
root@minio-01:~ # zfs list
NAME                                        USED  AVAIL  REFER  MOUNTPOINT
MINIO-DATA                                  672K  9.20G    96K  /MINIO-DATA
MINIO-DATA/minio-datastore                   96K  9.20G    96K  /MINIO-DATA/minio-datastore
zroot                                      1.92G  3.41G    96K  /zroot
zroot/ROOT                                 1.92G  3.41G    96K  none
zroot/ROOT/14.3-RELEASE_2025-11-15_130359     8K  3.41G  1.67G  /
zroot/ROOT/default                         1.92G  3.41G  1.77G  /
zroot/home                                  224K  3.41G    96K  /home
zroot/home/dpasek                           128K  3.41G   128K  /home/dpasek
zroot/tmp                                    96K  3.41G    96K  /tmp
zroot/usr                                   288K  3.41G    96K  /usr
zroot/usr/ports                              96K  3.41G    96K  /usr/ports
zroot/usr/src                                96K  3.41G    96K  /usr/src
zroot/var                                   640K  3.41G    96K  /var
zroot/var/audit                              96K  3.41G    96K  /var/audit
zroot/var/crash                              96K  3.41G    96K  /var/crash
zroot/var/log                               152K  3.41G   152K  /var/log
zroot/var/mail                              104K  3.41G   104K  /var/mail
zroot/var/tmp                                96K  3.41G    96K  /var/tmp
root@minio-01:~ #
Install and Enable MinIO
The minio package is available in the FreeBSD package repository ...
root@minio-01:~ # pkg search minio
minio-2025.10.15.17.29.55_1    Amazon S3 compatible object storage server
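For reference, a minimal sketch of the install-and-enable step for the single node, assuming the standard rc variables minio_enable and minio_disks (the latter appears again in the cluster section below):
# Install MinIO from packages
pkg install -y minio
# Enable the service and point it at the ZFS dataset
sysrc minio_enable="YES"
sysrc minio_disks="MINIO-DATA/minio-datastore"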
Start MinIO
service minio start
Configure and Validate MinIO command line client
Let's configure the minio-client to validate the status of our MinIO object storage.
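Before the admin commands work, an alias for the local endpoint has to be configured. A minimal sketch, assuming the client is packaged as minio-client and using placeholder credentials (replace them with the root user and password configured for your instance):
# Install the client and create the "local" alias used in the commands below
pkg install -y minio-client
minio-client alias set local http://127.0.0.1:9000 <ACCESS_KEY> <SECRET_KEY>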
root@minio-01:~ # minio-client admin info local
●  localhost:9000
   Uptime: 10 hours
   Version: 2025-10-15T17:29:55Z
   Network: 1/1 OK
   Drives: 1/1 OK
   Pool: 1

┌──────┬───────────────────────┬─────────────────────┬──────────────┐
│ Pool │ Drives Usage          │ Erasure stripe size │ Erasure sets │
│ 1st  │ 0.0% (total: 9.2 GiB) │ 1                   │ 1            │
└──────┴───────────────────────┴─────────────────────┴──────────────┘

1 drive online, 0 drives offline, EC:0
root@minio-01:~ #
MinIO Web Console
You can use the MinIO Web Console at http://192.168.8.51:9001 (port 9000 serves the S3 API).
Cluster (6-node) Installation Procedure
We have a working single-node MinIO, so let's document the procedure for moving to a 6-node distributed MinIO cluster with 4+2 erasure coding.
Standalone vs Cluster Configuration
A standalone configuration runs on a single server. We are targeting a 6-node MinIO cluster architecture.
A 4+2 setup means:
- 4 data blocks
- 2 parity blocks
- total 6 drives per erasure set
To achieve a 6-node MinIO cluster with 4+2 EC, each node must contribute at least one disk/path to the erasure set, with each disk on a different server.
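As a quick capacity sanity check (illustrative only, using this lab's 10 GB data disks): with 4+2, usable capacity is data/(data+parity) = 4/6 of raw, before filesystem and MinIO metadata overhead.
# Illustrative sketch: 6 nodes x one 10 GB data disk each, 4+2 erasure coding
raw=$((6 * 10))            # 60 GB raw
usable=$((raw * 4 / 6))    # ~40 GB usable before overhead
echo "raw=${raw}GB usable~${usable}GB"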
A 6-node MinIO cluster requires:
- At least N = 6 storage endpoints
- All endpoints must be listed on every node
Each node provides one volume (to keep the design simple):
On each MinIO FreeBSD server, we must change the local minio_disks setting to network endpoints.
Old (single node) config:
sysrc minio_disks="MINIO-DATA/minio-datastore"
New (6-node cluster) config:
Add the following lines to /etc/rc.conf:
minio_disks="\
http://192.168.8.51:9000/MINIO-DATA/minio-datastore \
http://192.168.8.52:9000/MINIO-DATA/minio-datastore \
http://192.168.8.53:9000/MINIO-DATA/minio-datastore \
http://192.168.8.54:9000/MINIO-DATA/minio-datastore \
http://192.168.8.55:9000/MINIO-DATA/minio-datastore \
http://192.168.8.56:9000/MINIO-DATA/minio-datastore
"
Warning! Do not use sysrc for the configuration above, because sysrc does not support the multi-line format with \ at the end of lines, and it would mangle the whole /etc/rc.conf file.
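If you still prefer sysrc, the same value should be settable as one long line, since the limitation is only the multi-line backslash format. A sketch (untested here):
# One-line alternative: all six endpoints in a single quoted value
sysrc minio_disks="http://192.168.8.51:9000/MINIO-DATA/minio-datastore http://192.168.8.52:9000/MINIO-DATA/minio-datastore http://192.168.8.53:9000/MINIO-DATA/minio-datastore http://192.168.8.54:9000/MINIO-DATA/minio-datastore http://192.168.8.55:9000/MINIO-DATA/minio-datastore http://192.168.8.56:9000/MINIO-DATA/minio-datastore"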
Cleaning old MinIO Standalone config and data
If you tested MinIO in standalone mode, you have to remove the MinIO system configuration and data on each server, as sketched below.
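A minimal sketch of that cleanup, assuming the standalone data lives directly under the dataset mountpoint /MINIO-DATA/minio-datastore (this destroys any test buckets and objects):
service minio stop
# Remove MinIO's internal metadata and any test data from the standalone run
rm -rf /MINIO-DATA/minio-datastore/.minio.sys
rm -rf /MINIO-DATA/minio-datastore/*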
Reapplying Configuration
Reapplying the cluster configuration is easy: just restart the minio service. I actually stopped the minio service on all 6 servers and, after the configuration change, started it again.
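In other words, on every node (minio-01 through minio-06):
service minio stop
# ...apply the /etc/rc.conf change described above...
service minio start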
Cluster Status Verification
If everything went correctly, we can check the MinIO cluster status ...
root@minio-01:~ # minio-client admin info local
●  192.168.8.51:9000
   Uptime: 8 minutes
   Version: 2025-10-15T17:29:55Z
   Network: 6/6 OK
   Drives: 1/1 OK
   Pool: 1

●  192.168.8.52:9000
   Uptime: 7 minutes
   Version: 2025-10-15T17:29:55Z
   Network: 6/6 OK
   Drives: 1/1 OK
   Pool: 1

●  192.168.8.53:9000
   Uptime: 15 minutes
   Version: 2025-10-15T17:29:55Z
   Network: 6/6 OK
   Drives: 1/1 OK
   Pool: 1

●  192.168.8.54:9000
   Uptime: 15 minutes
   Version: 2025-10-15T17:29:55Z
   Network: 6/6 OK
   Drives: 1/1 OK
   Pool: 1

●  192.168.8.55:9000
   Uptime: 7 minutes
   Version: 2025-10-15T17:29:55Z
   Network: 6/6 OK
   Drives: 1/1 OK
   Pool: 1

●  192.168.8.56:9000
   Uptime: 7 minutes
   Version: 2025-10-15T17:29:55Z
   Network: 6/6 OK
   Drives: 1/1 OK
   Pool: 1

┌──────┬──────────────────────┬─────────────────────┬──────────────┐
│ Pool │ Drives Usage         │ Erasure stripe size │ Erasure sets │
│ 1st  │ 0.0% (total: 28 GiB) │ 6                   │ 1            │
└──────┴──────────────────────┴─────────────────────┴──────────────┘

6 drives online, 0 drives offline, EC:3
root@minio-01:~ #
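Note that the output reports EC:3, which suggests the cluster came up with MinIO's default parity for 6 drives (effectively 3+3) rather than the intended 4+2; the reported total of 28 GiB, roughly half of the ~55 GiB of raw drive space, also appears consistent with that. If you want to enforce 4+2, the standard storage-class parity can be lowered to EC:2, for example:
# Set standard storage-class parity to 2 (4 data + 2 parity on 6 drives)
minio-client admin config set local storage_class standard=EC:2
# Restart the servers so the change applies to newly written objects
minio-client admin service restart local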
NGINX as a Load Balancer in front of MinIO Cluster
MinIO can be accessed via each cluster node; however, a single IP address with a load balancer spreading the S3 load across all 6 MinIO nodes should be used. Let's use NGINX as the load balancer.
Set up a VM (2 vCPU, 2 GB RAM, 8 GB disk) dedicated to the load balancer, install FreeBSD 14.3 with the full nginx package, and use IP address 192.168.8.50. This will be the preferred S3 storage endpoint.
pkg install -y nginx-full
user www;
worker_processes auto;
events {
worker_connections 8192;
use kqueue;
}
stream {
# ---- S3 API ----
upstream minio_s3 {
least_conn;
server 192.168.8.51:9000;
server 192.168.8.52:9000;
server 192.168.8.53:9000;
server 192.168.8.54:9000;
server 192.168.8.55:9000;
server 192.168.8.56:9000;
}
server {
listen 9000;
proxy_timeout 2h;
proxy_pass minio_s3;
}
# ---- MinIO Console ----
upstream minio_console {
least_conn;
server 192.168.8.51:9001;
server 192.168.8.52:9001;
server 192.168.8.53:9001;
server 192.168.8.54:9001;
server 192.168.8.55:9001;
server 192.168.8.56:9001;
}
server {
listen 9001;
proxy_timeout 1h;
proxy_pass minio_console;
}
}
Enable and Start NGINX
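A minimal sketch of the remaining steps, assuming the configuration above replaces the default /usr/local/etc/nginx/nginx.conf:
# Validate the configuration, enable nginx at boot, and start it
nginx -t
sysrc nginx_enable="YES"
service nginx start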
FreeBSD Tuning Profile for S3 high-throughput
Kernel & TCP Tuning (CRITICAL)
/etc/sysctl.conf
# ---------- Socket backlog ----------
kern.ipc.somaxconn=65535
# ---------- TCP buffers ----------
net.inet.tcp.sendspace=1048576
net.inet.tcp.recvspace=1048576
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
# ---------- TCP performance ----------
net.inet.tcp.msl=15000
net.inet.tcp.fastopen.server_enable=1
net.inet.tcp.blackhole=2
net.inet.udp.blackhole=1
# ---------- Network queues ----------
net.inet.tcp.syncache.hashsize=4096
net.inet.tcp.syncache.bucketlimit=128
# ---------- NIC offloading ----------
net.inet.tcp.tso=1
net.inet.tcp.lro=1
net.inet.ip.check_interface=0
# ---------- Max mbufs ----------
kern.ipc.nmbclusters=1048576
You can apply it with the following command:
sysctl -f /etc/sysctl.conf
Loader Tuning (BOOT-TIME)
/boot/loader.conf
# Increase mbufs early
kern.ipc.nmbclusters=1048576
# Enable RSS (multi-queue NICs)
net.inet.rss.enabled=1
net.inet.rss.bits=6
# Reduce latency
hw.intr_storm_threshold=1000
Reboot the system to apply the settings above.
Network Interface Tuning (VERY IMPORTANT)
Enable LRO (Large Receive Offload) and TCP Segmentation Offload (TSO) ...
ifconfig vmx0 lro tso
You should make this setting persistent for the interface in /etc/rc.conf, for example as shown below.
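A hypothetical line for minio-01 (adjust the address and netmask to your environment, and append the options to your existing ifconfig_vmx0 entry):
# /etc/rc.conf on minio-01
ifconfig_vmx0="inet 192.168.8.51 netmask 255.255.255.0 lro tso"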
Logical Design
Here are a few schemas from the logical design of the potential S3 object storage.
Figure: Logical Design - Capacity Planning
Figure: Logical Design - Capacity Planning - Multiple Datastores
Read/write caches are currently part of the design, but they may or may not be used in the final design, depending on performance testing on real hardware.
Figure: Logical Design - 100 Gb Networking
A 100 Gb CLOS (leaf-spine) network topology is used for the distributed storage system.
Conclusion
This is just a simple implementation of a 6-node MinIO object storage cluster in a test environment. More advanced architectures can be planned, designed, and implemented, but this is a good foundation for anything more advanced.
It is worth mentioning that MinIO is not the only S3-compatible open-source storage solution; there are other very interesting projects.
Let's mention two MinIO alternatives:
- SeaweedFS
- RustFS
I will definitely test these two object storage systems in the future.
I hope this is beneficial for other hackers in the FreeBSD community.