
I made the following setup to compare the performance of the virtio-pci and e1000 drivers:

[diagram: virtio test setup]

I expected to see much higher throughput in case of virtio-pci compared to e1000, but they performed identically.
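The NIC model was switched on the QEMU command line between runs; the exact invocation is not reproduced here, but it was along these lines (the memory size, tap name tap0 and netdev id net0 below are placeholders, not the real values):

# virtio-pci run (sketch with placeholder names)
qemu-system-x86_64 -enable-kvm -m 1024 \
    -netdev tap,id=net0,ifname=tap0,script=no,downscript=no \
    -device virtio-net-pci,netdev=net0

# e1000 run (identical apart from the NIC model)
qemu-system-x86_64 -enable-kvm -m 1024 \
    -netdev tap,id=net0,ifname=tap0,script=no,downscript=no \
    -device e1000,netdev=net0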

Test with virtio-pci (192.168.0.126 is configured on T60 and 192.168.0.129 on PC1):

root@PC1:~# grep hype /proc/cpuinfo
flags       : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 syscall nx lm rep_good nopl pni vmx cx16 x2apic hypervisor lahf_lm tpr_shadow vnmi flexpriority ept vpid
root@PC1:~# lspci -s 00:03.0 -v
00:03.0 Ethernet controller: Red Hat, Inc Virtio network device
    Subsystem: Red Hat, Inc Device 0001
    Physical Slot: 3
    Flags: bus master, fast devsel, latency 0, IRQ 11
    I/O ports at c000 [size=32]
    Memory at febd1000 (32-bit, non-prefetchable) [size=4K]
    Expansion ROM at feb80000 [disabled] [size=256K]
    Capabilities: [40] MSI-X: Enable+ Count=3 Masked-
    Kernel driver in use: virtio-pci

root@PC1:~# iperf -c 192.168.0.126 -d -t 30 -l 64
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 192.168.0.126, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.0.129 port 41573 connected with 192.168.0.126 port 5001
[  5] local 192.168.0.129 port 5001 connected with 192.168.0.126 port 44480
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-30.0 sec   126 MBytes  35.4 Mbits/sec
[  5]  0.0-30.0 sec   126 MBytes  35.1 Mbits/sec
root@PC1:~# 

Test with e1000 (192.168.0.126 is configured on T60 and 192.168.0.129 on PC1):

root@PC1:~# grep hype /proc/cpuinfo
flags       : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 syscall nx lm rep_good nopl pni vmx cx16 x2apic hypervisor lahf_lm tpr_shadow vnmi flexpriority ept vpid
root@PC1:~# lspci -s 00:03.0 -v
00:03.0 Ethernet controller: Intel Corporation 82540EM Gigabit Ethernet Controller (rev 03)
    Subsystem: Red Hat, Inc QEMU Virtual Machine
    Physical Slot: 3
    Flags: bus master, fast devsel, latency 0, IRQ 11
    Memory at febc0000 (32-bit, non-prefetchable) [size=128K]
    I/O ports at c000 [size=64]
    Expansion ROM at feb80000 [disabled] [size=256K]
    Kernel driver in use: e1000

root@PC1:~# iperf -c 192.168.0.126 -d -t 30 -l 64
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 192.168.0.126, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.0.129 port 42200 connected with 192.168.0.126 port 5001
[  5] local 192.168.0.129 port 5001 connected with 192.168.0.126 port 44481
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-30.0 sec   126 MBytes  35.1 Mbits/sec
[  5]  0.0-30.0 sec   126 MBytes  35.1 Mbits/sec
root@PC1:~# 

With large packets the bandwidth was ~900 Mbit/s with both drivers.
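(The exact command for the large-packet runs is not quoted above; presumably it was the same invocation without the -l 64 option, letting iperf use its default write size, along the lines of:)

iperf -c 192.168.0.126 -d -t 30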

When does the theoretically higher performance of virtio-pci come into play? Why did I see equal performance with e1000 and virtio-pci?

  • Could you watch the CPU usage of the host and VM while doing this benchmark? Maybe the non-virtio driver needs more CPU, but yours is fast enough. – rudimeier Oct 13 '16 at 15:06
  • I did watch the CPU usage and for both drivers it was pretty much the same. I executed `iperf -c 192.168.0.126 -d -t 300 -l 64; uptime` with both drivers; with `e1000` the results were `load average: 0.04, 0.07, 0.05` and with `virtio-pci` they were `load average: 0.23, 0.11, 0.05`. CPU usage on the host machine was also basically the same (I checked this with `top`). – Martin Oct 13 '16 at 18:39
  • Number of connections, number of IPs? – phk Oct 24 '16 at 22:28
  • Another guess: maybe virtio is able to pass through "hardware features" of the host's NIC (like segmentation and checksum offloading) *if the host NIC does support them*. In other words, if the host NIC does not have such advanced features, then virtio cannot be better than e1000. – rudimeier Oct 26 '16 at 10:03
  • @rudimeier If that were the case, it would only apply to traffic going through the host's NICs, I guess. – phk Oct 27 '16 at 21:09
  • Can you please post your qemu config? – sean Aug 06 '18 at 12:38
  • See here for another comparison: https://www.linux-kvm.org/page/Using_VirtIO_NIC – ceving Sep 24 '19 at 13:20
  • Did you install the kvm kernel module, or is this a bare qemu system? – jrglndmnn Jan 14 '21 at 06:58
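Regarding the offload guesses in the comments above: whether the guest driver actually negotiated segmentation and checksum offload can be checked from inside the VM, for example like this (the interface name eth0 is an assumption):

# list the offloads the guest NIC driver currently has enabled
ethtool -k eth0 | grep -E 'segmentation|scatter|checksum'

With virtio-net these features are negotiated with the host, so whether they end up on or off can make a noticeable difference for small writes.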

1 Answer


You performed a bandwidth test, which does not stress PCI.

You need to simulate an environment with many concurrent sessions; that is where you should see a difference.

Perhaps iperf's -P 400 option (400 parallel client streams) might simulate that kind of test, as sketched below.
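Something along these lines, reusing the addresses from the question (optionally combined with the small -l 64 writes):

iperf -c 192.168.0.126 -P 400 -t 30
iperf -c 192.168.0.126 -P 400 -t 30 -l 64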
