About multi-queue functionality
Use multi-queue functionality to scale network throughput and performance on virtual machines (VMs) with multiple vCPUs.
By default, the queueCount value, which is derived from the domain XML, matches the number of vCPUs allocated to the VM. Without multi-queue, network performance does not scale as the number of vCPUs increases, because virtio-net provides only one Tx and one Rx queue, so guests cannot transmit or receive packets in parallel.
Note
Enabling virtio-net multiqueue does not offer significant improvements when the number of vNICs in a guest instance is proportional to the number of vCPUs.
Known limitations
- MSI vectors are still consumed if virtio-net multiqueue is enabled in the host but not enabled in the guest operating system by the administrator.
- Each virtio-net queue consumes 64 KiB of kernel memory for the vhost driver.
- Starting a VM with more than 16 CPUs results in no connectivity if `networkInterfaceMultiqueue` is set to `true` (CNV-16107).
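Because queues enabled on the host still consume MSI vectors until the guest activates them, the guest administrator typically verifies and enables the queues from inside the guest. A minimal sketch using ethtool, where the interface name `eth0` and the queue count `4` are assumptions for illustration:

```shell
# Show how many combined (Tx/Rx) queues the virtio-net device
# supports and how many are currently active (assumed interface: eth0).
ethtool -l eth0

# Activate four combined queues in the guest; the count must not
# exceed the "Pre-set maximums" reported by the command above.
ethtool -L eth0 combined 4
```

These commands require root privileges inside the guest and a virtio_net driver that supports channel configuration.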
Enabling multi-queue functionality
You can enable multi-queue functionality for interfaces configured with a VirtIO model.
1. Set the `networkInterfaceMultiqueue` value to `true` in the `VirtualMachine` manifest file of your VM to enable multi-queue functionality:

   ```yaml
   apiVersion: kubevirt.io/v1
   kind: VM
   spec:
     domain:
       devices:
         networkInterfaceMultiqueue: true
   ```

2. Save the `VirtualMachine` manifest file to apply your changes.
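For orientation, here is a minimal sketch of where the setting sits in a fuller manifest, assuming a KubeVirt `VirtualMachine` object (where the domain specification nests under `spec.template.spec`); the VM name `vm-example` and the `default` network are hypothetical, chosen for illustration:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-example            # hypothetical name for illustration
spec:
  template:
    spec:
      domain:
        devices:
          networkInterfaceMultiqueue: true   # enable multi-queue
          interfaces:
            - name: default
              masquerade: {}
              model: virtio   # multi-queue applies to virtio-model interfaces
      networks:
        - name: default
          pod: {}
```

You can then apply the manifest with `oc apply -f <filename>.yaml` (or `kubectl apply` on plain KubeVirt).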