In this blog post, I will install and configure FreeBSD/Bhyve to set up a FreeBSD virtualization host. I use FreeBSD 14.3. The installation of FreeBSD and the preparation of networking and storage are not covered here, as they are already in place and described in my other blog posts.
Let’s explore the installation and configuration of Bhyve, a process that is simple and straightforward.
Physical Networking (aka Underlay)
In my home lab, I have the following L2 network layout ...
[Image: Two NICs in LACP port-channel]
Storage (local ZFS)
To support Bhyve, the following storage disk layout was prepared and will be used in this setup ...
[Image: 1x USB 16 GB, 2x SAS 146 GB, 6x NL-SAS 500 GB, 2x NVMe]
Among the available mount points, the most important for Bhyve is the ZFS mount point /STORAGE-DATA/bhyve-datastore ...
[Image: 1.75 TB ZFS Dataset for Bhyve]
Bhyve Installation
Install Bhyve packages
pkg install vm-bhyve bhyve-firmware
To enable vm-bhyve, we have to add the following line to /etc/rc.conf (sysrc does this for us)
sysrc vm_enable="YES"
As we want to use the already prepared ZFS storage, we will specify the following ZFS dataset.
sysrc vm_dir="zfs:STORAGE-DATA/bhyve-datastore"
Now, add the following line to the end of /boot/loader.conf ...
# needed for virtualization support
vmm_load="YES"
This completes the installation phase. The following section focuses on Bhyve configuration.
Bhyve Configuration
Bhyve configuration consists of initializing Bhyve, setting up virtual networking, and making the configuration persistent across reboots.
Bhyve Initialization
To initialize Bhyve, run the following command ...
vm init
What does it do?
This command should be run once after each host reboot, before running any other vm commands. Its main functions are:
- Load all necessary kernel modules if not already loaded
- Set tap devices to come up automatically when opened
- Create any configured virtual switches
You only need to run this command once after you've configured vm_enable and vm_dir in /etc/rc.conf. It's a foundational step that gets your Bhyve environment ready for creating and managing virtual machines.
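If you want to confirm that initialization did what it promises, a quick sanity check from a FreeBSD shell on the host might look like this (the datastore path is the one used in this post):

```shell
# 'vm init' loads the vmm kernel module among others; -q -m queries quietly
kldstat -q -m vmm && echo "vmm module is loaded"
# the datastore skeleton (.config, .img, .iso, .templates) should also exist
ls -a /STORAGE-DATA/bhyve-datastore
```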
Virtual Networking
Now we will create a new virtual switch named vSwitch0. A virtual switch in vm-bhyve is essentially a bridge interface on the FreeBSD host system. It acts like a network switch, allowing multiple virtual network interfaces to connect to it and communicate with each other. This switch is what enables network connectivity for your VMs.
vm switch create vSwitch0
We can list all virtual switches in our system ...
root@bhyve01:~ # vm switch list
NAME      TYPE      IFACE        ADDRESS  PRIVATE  MTU  VLAN  PORTS
vSwitch0  standard  vm-vSwitch0  -        no       -    -     -
root@bhyve01:~ #
We see that MTU and VLAN are not set. Let's configure them to fully integrate the switch with my home lab networking.
MTU is set by using ifconfig on the interface created for each Bhyve virtual switch. The interface name is composed of the prefix (vm), a dash (-), and the virtual switch name. In my case, it is vm-vSwitch0.
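The naming rule is mechanical, so it is easy to derive the interface name for any switch. A minimal POSIX-shell illustration (the variable names are mine):

```shell
# vm-bhyve names the bridge interface for each virtual switch
# as the prefix "vm-" followed by the switch name.
switch="vSwitch0"
iface="vm-${switch}"
echo "${iface}"   # prints: vm-vSwitch0
```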
ifconfig vm-vSwitch0 mtu 9000
VLAN is set with the vm switch command.
vm switch vlan vSwitch0 8
We can double-check the vSwitch0 settings.
root@bhyve01:~ # vm switch list
NAME      TYPE      IFACE        ADDRESS  PRIVATE  MTU  VLAN  PORTS
vSwitch0  standard  vm-vSwitch0  -        no       -    8     -
root@bhyve01:~ # vm switch info vSwitch0
------------------------
Virtual Switch: vSwitch0
------------------------
  type: standard
  ident: vm-vSwitch0
  vlan: 8
  physical-ports: -
  bytes-in: 0 (0.000B)
  bytes-out: 0 (0.000B)
root@bhyve01:~ #
We can see that the VLAN is set; however, the MTU is not visible in the vSwitch0 config.
The MTU can be double-checked with the following command ...
root@bhyve01:~ # ifconfig vm-vSwitch0
vm-vSwitch0: flags=1008843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,LOWER_UP> metric 0 mtu 9000
        options=0
        ether 76:43:b1:28:4b:3a
        id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15
        maxage 20 holdcnt 6 proto rstp maxaddr 2000 timeout 1200
        root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0
        groups: bridge vm-switch viid-fb9d1@
        nd6 options=9<PERFORMNUD,IFDISABLED>
root@bhyve01:~ #
The last thing is to connect vSwitch0 to the physical network via the FreeBSD/Bhyve uplink lagg0, using the following command.
vm switch add vSwitch0 lagg0
The final configuration can be verified ...
root@bhyve01:~ # vm switch list
NAME      TYPE      IFACE        ADDRESS  PRIVATE  MTU  VLAN  PORTS
vSwitch0  standard  vm-vSwitch0  -        no       -    8     lagg0
root@bhyve01:~ # vm switch info vSwitch0
------------------------
Virtual Switch: vSwitch0
------------------------
  type: standard
  ident: vm-vSwitch0
  vlan: 8
  physical-ports: lagg0
  bytes-in: 0 (0.000B)
  bytes-out: 0 (0.000B)
root@bhyve01:~ #
Make Bhyve configuration persistent
It is simple, right? However, this configuration is not stored in /etc/rc.conf, so we would lose it after a reboot. Let's put it into /etc/rc.conf.
vm_vSwitch0_type="standard"
vm_vSwitch0_ident="vm-vSwitch0"
vm_vSwitch0_phys="lagg0"
vm_vSwitch0_flags="-n 8"
The vm_vSwitch0_flags entry passes the -n 8 flag to the vm switch create command at boot, which sets the VLAN ID to 8, matching the VLAN we configured earlier.
There is no MTU setting stored in /etc/rc.conf, right? It is worth noting that the MTU is set on the physical interface (lagg0). When a physical interface with a high MTU (e.g., lagg0 with mtu 9000) is added to a bridge, the bridge automatically inherits that MTU. Make sure the MTU for the physical interface in /etc/rc.conf is set to 9000.
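As a sketch, an /etc/rc.conf fragment for a jumbo-frame LACP uplink might look like this. The member NIC names ix0 and ix1 are assumptions for illustration; substitute your actual interfaces:

```shell
# members of the LACP port-channel, with jumbo frames enabled
ifconfig_ix0="up mtu 9000"
ifconfig_ix1="up mtu 9000"
# the lagg0 aggregate itself
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport ix0 laggport ix1 mtu 9000 up"
```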
That’s all for the Hypervisor and Virtual Networking configuration. In the next section, we will create our first VM (Virtual Machine).
Creating a Windows Server 2025 VM using vm-bhyve
Let's create a Windows Server 2025 VM. First of all, we need to put the Windows ISO file in a location accessible to Bhyve.
root@bhyve01:/STORAGE-DATA/bhyve-datastore # ls -la
total 3
drwxr-xr-x 6 root wheel 6 Sep 21 07:23 .
drwxr-xr-x 3 root wheel 3 Aug 17 11:56 ..
drwxr-xr-x 2 root wheel 4 Sep 21 04:33 .config
drwxr-xr-x 2 root wheel 2 Sep 21 04:33 .img
drwxr-xr-x 2 root wheel 2 Sep 21 04:33 .iso
drwxr-xr-x 2 root wheel 3 Sep 21 04:41 .templates
root@bhyve01:/STORAGE-DATA/bhyve-datastore #
As our vm_dir is "zfs:STORAGE-DATA/bhyve-datastore", there is a subdirectory .iso where ISO files should be stored, so I have copied the Windows and FreeBSD ISOs into this directory.
root@bhyve01:/STORAGE-DATA/bhyve-datastore/.iso # pwd
/STORAGE-DATA/bhyve-datastore/.iso
root@bhyve01:/STORAGE-DATA/bhyve-datastore/.iso # ls -lah
total 6721982
drwxr-xr-x 2 root wheel 4B Sep 21 07:36 .
drwxr-xr-x 6 root wheel 6B Sep 21 07:23 ..
-rw-r--r-- 1 dpasek dpasek 1.2G Sep 21 07:35 FreeBSD-14.3-RELEASE-amd64-disc1.iso
-rw-r--r-- 1 dpasek dpasek 5.6G Sep 21 07:34 WinSrv2025.iso
root@bhyve01:/STORAGE-DATA/bhyve-datastore/.iso #
Once we have WinSrv2025.iso, we'll also need to download an ISO containing the latest stable virtio drivers for Windows. I fetched it directly into the .iso directory.
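A sketch of the download step: the URL below is the Fedora project's stable-virtio alias for the Windows virtio driver ISO at the time of writing, and is an assumption; verify the current location upstream before use.

```shell
cd /STORAGE-DATA/bhyve-datastore/.iso
fetch https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso
```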
root@bhyve01:~ # cd /STORAGE-DATA/bhyve-datastore/.iso/
root@bhyve01:/STORAGE-DATA/bhyve-datastore/.iso # ls -la
total 9891198
drwxr-xr-x 2 root wheel 6 Sep 21 07:56 .
drwxr-xr-x 7 root wheel 7 Sep 21 09:28 ..
-rw-r--r-- 1 dpasek dpasek 1302714368 Sep 21 07:35 FreeBSD-14.3-RELEASE-amd64-disc1.iso
-rw-r--r-- 1 dpasek dpasek 3555434496 Sep 21 07:56 GhostBSD-25.02-R14.3p2-GERSHWIN.iso
-rw-r--r-- 1 dpasek dpasek 6014152704 Sep 21 07:34 WinSrv2025.iso
-rw-r--r-- 1 root wheel 789645312 Sep 12 01:17 virtio-win.iso
root@bhyve01:/STORAGE-DATA/bhyve-datastore/.iso #
Now that we’ve got both our Windows Server 2025 ISO (WinSrv2025.iso) and our virtio driver ISO (virtio-win.iso), it’s time to create the guest.
VM hardware templates are stored in the directory /STORAGE-DATA/bhyve-datastore/.templates
There is only one template, called default.
root@bhyve01:/STORAGE-DATA/bhyve-datastore/.templates # ls -la
total 3
drwxr-xr-x 2 root wheel 3 Sep 21 04:41 .
drwxr-xr-x 6 root wheel 6 Sep 21 07:23 ..
-rw-r--r-- 1 root wheel 136 Sep 21 04:41 default.conf
root@bhyve01:/STORAGE-DATA/bhyve-datastore/.templates # cat default.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
root@bhyve01:/STORAGE-DATA/bhyve-datastore/.templates #
Let's create our own template uefi.conf ...
# UEFI is the loader to use, no matter what OS you're installing on the guest.
loader="uefi"
graphics="yes"
xhci_mouse="yes"
# If not specified, cpu=n will give the guest n discrete CPU sockets.
# This is generally OK for Linux or BSD guests, but Windows throws a fit
# due to licensing issues, so we specify CPU topology manually here.
cpu=4
cpu_sockets=1
cpu_cores=4
# Remember, a guest doesn’t need extra RAM for filesystem caching--
# the host handles that for it. 4G is ludicrously low for Windows on hardware,
# but it’s generally more than sufficient for a guest. We will use 8G.
memory=8G
# Put up to 8 disks on a single ahci controller. This avoids the creation of
# a new "controller" on a new "PCIe slot" for each drive added to the guest.
ahci_device_limit="8"
# e1000 works out-of-the-box, but virtio-net performs better. Virtio support
# is built in on FreeBSD and Linux guests, but Windows guests will need
# to have virtio drivers manually installed.
#network0_type="e1000"
network0_type="virtio-net"
network0_switch="vSwitch0"
# bhyve/nvme storage is considerably faster than bhyve/virtio-blk
# storage in my testing, on Windows, Linux, and FreeBSD guests alike.
disk0_type="nvme"
disk0_name="disk0.img"
# This gives the guest a virtual "optical" drive. Specifying disk1_dev="custom"
# allows us to provide a full path to the ISO.
disk1_type="ahci-cd"
disk1_dev="custom"
disk1_name="/STORAGE-DATA/bhyve-datastore/.iso/virtio-win.iso"
# windows expects the host to expose localtime by default, not UTC
utctime="no"
This newly created template will serve us well for guests running FreeBSD, Linux, or Windows. Now that we’ve got a nice clean template to use, let’s create our first guest.
vm create -t uefi -s 100G windows2025
We can double-check that the VM was created by listing all VMs ...
root@bhyve01:~ # vm list
NAME         DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
windows2025  default    uefi    4    8G      -    No    Stopped
root@bhyve01:~ #
The command vm config windows2025 opens the VM configuration in our system default text editor, which is vi. This is how to change a VM configuration after it has been deployed from our uefi template (backed by the file /STORAGE-DATA/bhyve-datastore/.templates/uefi.conf). We do not need to change anything for our test Windows VM.
Now that our windows2025 guest’s hardware configuration is the way we want it, it’s time to actually install Windows on it ...
root@bhyve01:~ # vm install windows2025 /STORAGE-DATA/bhyve-datastore/.iso/WinSrv2025.iso
Starting windows2025
  * found guest in /STORAGE-DATA/bhyve-datastore/windows2025
  * booting...
root@bhyve01:~ #
... and double-check the current state of the VM ...
root@bhyve01:/STORAGE-DATA/bhyve-datastore/windows2025 # vm list
NAME         DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
windows2025  default    uefi    4    8G      0.0.0.0:5900  No    Locked (bhyve01.home.uw.cz)
root@bhyve01:/STORAGE-DATA/bhyve-datastore/windows2025 #
I have TigerVNC Viewer installed on another machine with network access to bhyve01.home.uw.cz on port 5900, so it is pretty simple to access the VM console, as depicted in the two screenshots below.
[Image: Using VNC Viewer to Access the VM Console]
[Image: Windows Server 2025 Installation in a Bhyve VM Console]
One useful trick during the Windows installation is sending CTRL+ALT+DEL. In TigerVNC Viewer, you do this via the popup menu, which is designed to handle special key combinations that your local operating system would normally intercept.
- Press the F8 key on your keyboard to bring up the popup menu.
- Within the popup menu, you will see an option to Send Ctrl-Alt-Del. Click on it.
Some viewers may also allow you to "lock" or "hold" the CTRL and ALT keys from this menu, after which you would simply press the DEL key. However, the most reliable and direct method is to use the dedicated menu option.
Virtio Drivers
We use a virtio NIC (network0_type="virtio-net"), which is a paravirtualized device requiring drivers not included in the Microsoft Windows operating system. That's why we have to install the virtio drivers; otherwise our virtual NIC does not work and our Windows VM has no network connectivity. That's where virtio-win.iso comes into play. The virtio drivers installation wizard is depicted in the screenshot below.
[Image: Virtio Drivers Installation Wizard for Microsoft Windows]
The Bhyve VM with Windows Server 2025 is created. In the future, I'll write follow-up blog posts about some Guest OS specific details, such as what Windows Device Manager sees, storage performance benchmarks, network performance benchmarks, etc.
Automatically starting guests on boot
Another obvious requirement is to start some VMs automatically when the host system boots. It is easy and very logical. You just need to add a couple of stanzas to /etc/rc.conf. In the following example, we auto start three VMs:
# start the following vms automatically, at vm_delay second intervals
vm_list="Router01 LinuxDockerHost windows2025"
vm_delay="15"
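The same two entries can be added without hand-editing the file, using sysrc as we did during installation. Router01 and LinuxDockerHost are example VM names from the stanza above; windows2025 is the guest created in this post.

```shell
sysrc vm_list="Router01 LinuxDockerHost windows2025"
sysrc vm_delay="15"
```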
Conclusion
Bhyve is a powerful hypervisor that builds on the strong reputation of the FreeBSD operating system, making FreeBSD an excellent choice for a server virtualization host.
Stay tuned for upcoming blog posts, where I will cover Bhyve both as a standalone virtualization host and as part of a High Availability cluster for production workloads that require resilience at the virtualization layer.
References
[1] From 0 to Bhyve on FreeBSD 13.1 : https://klarasystems.com/articles/from-0-to-bhyve-on-freebsd-13-1/