FreeBSD is a clean, powerful, secure, and reliable Unix-like Operating System built for performance and freedom, and that’s why I’ve been happily nerding out with it for almost 30 years.
I need NFS 4.x storage for some tests. I would like to leverage FreeBSD 14.3 with ZFS and export two ZFS datasets via NFS.
In this blog post, I will document how to set up NFS storage on FreeBSD.
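As a preview, here is a minimal sketch of what the setup looks like. The pool name "tank", the dataset names "ds1"/"ds2", and the client network are assumptions for illustration, not part of my actual environment:

    # /etc/rc.conf -- enable the NFSv4 server
    nfs_server_enable="YES"
    nfsv4_server_enable="YES"
    mountd_enable="YES"
    nfsuserd_enable="YES"
    rpcbind_enable="YES"

    # /etc/exports -- NFSv4 root plus the two datasets
    V4: /tank
    /tank/ds1 -network 192.168.1.0 -mask 255.255.255.0
    /tank/ds2 -network 192.168.1.0 -mask 255.255.255.0

    # create the datasets and start the services
    zfs create tank/ds1
    zfs create tank/ds2
    service nfsd start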
ImpossibleCloud S3 is a pretty interesting European object storage offering. Cloud4com's datacenters are located in Prague, Czechia, and ImpossibleCloud's closest S3 storage is in Frankfurt. ImpossibleCloud could be used for offsite backups or remote object repositories.
In this blog post, I will benchmark ImpossibleCloud S3 storage accessed from Cloud4com's Prague datacenter: the S3 client runs in a Cloud4com vPDC in Prague and accesses ImpossibleCloud S3 storage in Frankfurt.
Cloud4com's physical datacenter is located in Prague (TTC).
ImpossibleCloud S3 storage is located in the “eu-central-2” region, which corresponds to data centers in Germany (Europe / Frankfurt area).
Let's run some performance tests and report the achieved results.
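A minimal sketch of the kind of test I have in mind, using the standard AWS CLI. The endpoint URL and bucket name below are placeholders, not ImpossibleCloud's actual values:

    # generate a 1 GiB test file and time upload/download
    dd if=/dev/urandom of=test-1g.bin bs=1m count=1024

    time aws s3 cp test-1g.bin s3://my-bench-bucket/ \
        --endpoint-url https://s3.eu-central-2.example-endpoint.com

    time aws s3 cp s3://my-bench-bucket/test-1g.bin download.bin \
        --endpoint-url https://s3.eu-central-2.example-endpoint.com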
Why do vmx0, vmx1, vmx2 interface names sometimes cause fear?
Anyone running FreeBSD as a router or firewall in a virtualized environment knows this situation well:
network interfaces are named vmx0, vmx1, vmx2, and critical configuration
(pf, routing, jails) depends on them. A small change can suddenly turn WAN into LAN and LAN into DMZ.
On physical hardware this is a common problem. Adding a PCI card can change device enumeration order. In VMware, the situation is much better, but it is still important to understand how to make interface naming stable and future-proof.
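One approach (a sketch, with made-up names and addresses) is to rename the interfaces at boot via rc.conf, so that pf, routing, and jail configuration reference stable logical names instead of driver unit numbers:

    # /etc/rc.conf -- rename interfaces, then configure them by the new names
    ifconfig_vmx0_name="wan"
    ifconfig_vmx1_name="lan"
    ifconfig_vmx2_name="dmz"
    ifconfig_wan="DHCP"
    ifconfig_lan="inet 192.168.10.1/24"
    ifconfig_dmz="inet 192.168.20.1/24"

This by itself does not protect against vmx0 and vmx1 swapping enumeration order, but it keeps the rest of the configuration decoupled from the driver names.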
I have multiple FreeBSD routers in environments across the world, each with its own WAN (Internet) connectivity, and I use WireGuard VPN to connect them all into a private network.
I would like to do …
The solution is pretty simple, and I will describe it in this blog post.
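For context, a minimal sketch of one such router's WireGuard configuration on FreeBSD, assuming the wireguard-tools package and its rc script; keys, addresses, and hostnames are placeholders:

    # pkg install wireguard-tools
    # /etc/rc.conf
    wireguard_enable="YES"
    wireguard_interfaces="wg0"

    # /usr/local/etc/wireguard/wg0.conf
    [Interface]
    PrivateKey = <this-router-private-key>
    Address = 10.99.0.1/24
    ListenPort = 51820

    [Peer]
    PublicKey = <remote-router-public-key>
    AllowedIPs = 10.99.0.2/32, 192.168.20.0/24
    Endpoint = remote-router.example.org:51820
    PersistentKeepalive = 25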
The default FreeBSD configuration is optimized for compatibility, not maximum network throughput. This becomes especially visible during iperf testing, routing benchmarks, or high-traffic workloads, where mbuf exhaustion or CPU bottlenecks can occur. Let's discuss various tunings.
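As a starting point, a few of the knobs involved. The values are illustrative, not recommendations; size them to your RAM and workload:

    # /boot/loader.conf -- raise the mbuf cluster limit
    kern.ipc.nmbclusters="1000000"

    # /etc/sysctl.conf -- allow larger socket buffers for high-bandwidth links
    kern.ipc.maxsockbuf=16777216
    net.inet.tcp.sendbuf_max=16777216
    net.inet.tcp.recvbuf_max=16777216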