05-22-2019 07:51 AM
When using Windows Server 2019 as a Hyper-V host server with an Intel X722 (1 Gbit/s), network performance in the guests (VMs) is very poor.
When copying files, you don't get more than 5-30 Mbit/s.
Many other users have the same issue with the Intel X722 (more cases on a German IT board).
Does anybody have a workaround?
05-22-2019 07:18 PM
I found someone with a similar issue on the MS forum, and he found a workaround ("disable RSC in the VM") to resolve the NIC performance issue. Please take a look and give it a shot.
Here is the information from the post:
"I have set up a test environment to try it. It was 100% repeatable. I have done a lot of testing and searching for a solution.
After a while, I found the following post: https://win10.guru/hyper-v-external-switches-killing-networking-in-insider-builds/ - in the comments section, RSC is mentioned. That led to the next one about Receive Segment Coalescing (RSC) in the vSwitch: https://docs.microsoft.com/sl-si/windows-server/networking/technologies/hpn/rsc-in-the-vswitch?view=...
The logical step was disabling RSC in the VM - that solved the problem. Applications from the shared folder now open at normal speed (as on a Windows 2016 VM or a Windows 10 1809 VM)."
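In case it saves you some clicks, the workaround would look roughly like this in PowerShell. The adapter and switch names below are placeholders, so check Get-NetAdapter / Get-VMSwitch for your own names first:
# Inside the guest VM: turn off Receive Segment Coalescing on the virtual NIC
# ("Ethernet" is a placeholder name)
Disable-NetAdapterRsc -Name "Ethernet"
# Or, on the Server 2019 host: turn off software RSC on the external vSwitch
# ("ExternalSwitch" is a placeholder name)
Set-VMSwitch -Name "ExternalSwitch" -EnableSoftwareRsc $false
# Check the result
Get-NetAdapterRsc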
05-27-2019 11:25 PM - edited 05-28-2019 12:32 AM
Thanks for your reply and sorry for the delay. I had to do several tests.
The issue only exists if the host is Server 2019 and the guest is NOT Server 2019.
If host and guest are both 2019, everything works fine.
I've tried the following with no success:
After this, I ran my tests with a low-budget Realtek USB GbE Family NIC, and everything works fine.
The result is the same as many other users have written on several boards.
But that can't be the solution for a brand-new Lenovo server...
05-28-2019 12:24 AM
Did you install the appropriate NIC driver for the guest OS?
If the driver was installed on the guest OS but the low-performance issue still occurs, please follow the procedure here to open a support ticket: How to open a support ticket for Lenovo Data Center products.
05-28-2019 01:12 AM
The NIC in the guest OS is a "Microsoft Hyper-V Network Adapter".
And as I understand it, the latest driver should be installed via Windows Update, right?
Anyway, I couldn't find drivers for the "Microsoft Hyper-V Network Adapter" on any reputable website.
And all Windows Updates are installed.
So do I have to open a support ticket?
05-28-2019 08:13 PM
Your understanding is correct; the latest driver for the guest OS should be installed via Windows Update.
Did you try editing the property settings for the MS Hyper-V Network Adapter?
Please try editing the following settings to see if it helps.
Go to Device Manager > Network Adapters and change the following "Advanced" settings for the Hyper-V Network Adapter (a PowerShell equivalent is sketched after the list).
1. Jumbo Packet -> set to Jumbo 9000.
2. Large Send Offload V2 (IPv4) -> set to Disabled.
3. Large Send Offload V2 (IPv6) -> set to Disabled.
4. Speed & Duplex -> set to 10 Gbps Full Duplex.
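If it's easier than clicking through Device Manager, roughly the same changes can be made with PowerShell inside the guest. "Ethernet" is a placeholder for the adapter name, and the exact DisplayName/DisplayValue strings depend on the driver, so check them with Get-NetAdapterAdvancedProperty first:
# List the adapter's advanced properties and current values first
Get-NetAdapterAdvancedProperty -Name "Ethernet"
# Jumbo frames (the value string varies by driver, e.g. "9014 Bytes" on some)
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Jumbo Packet" -DisplayValue "Jumbo 9000"
# Disable Large Send Offload for IPv4 and IPv6
Disable-NetAdapterLso -Name "Ethernet" -IPv4 -IPv6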
05-28-2019 10:12 PM
"Speed & Duplex" isn't available in my settings. Also consider: My X722 is only 1Gbps, not 10 Gbps!
I've set Jumbo Packet and Large Send Offload. No improvement.
Additionally, I applied the same settings to the hardware NIC and the virtual switch. No improvement.
Btw: In the meantime I've created a support ticket. Maybe they have a solution.
05-30-2019 11:54 AM
I thought I'd chime in since I have been fighting a similar issue for weeks now.
ThinkSystem SR570 w/ X722 10GbE NICs (and 1 GbE NICs)
Windows Server 2019 - running on the physical box - no VMs.
I have had terrible TCP performance across a high-latency, high-bandwidth link. UDP was OK. I used iperf3 to troubleshoot (with a TCP window size of 5M for this particular latency/BW combo on this link).
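For reference, this is roughly the kind of test I was running from PowerShell (203.0.113.10 stands in for the remote iperf3 server, and the exact durations/rates are just examples):
# TCP test with a 5 MB window across the WAN link
iperf3.exe -c 203.0.113.10 -w 5M -t 30
# UDP test at a fixed rate for comparison (UDP looked fine)
iperf3.exe -c 203.0.113.10 -u -b 900M -t 30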
The routers for the WAN link have traffic policies shaping throughput for the 1 Gbps link. The shaper is there because we initially had issues with the 10 GbE traffic going down to the 1 GbE WAN link. After the shapers were put in place, the other systems were able to utilize the 1 GbE throughput, but the new 2019 servers had terrible performance.
After trying different drivers, firmware, etc., the only solution I finally found was this:
In the driver's Advanced settings for the NIC (PowerShell equivalent after the list):
Recv Segment Coalescing (IPv4) - set to Disabled
Recv Segment Coalescing (IPv6) - set to Disabled
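The same change can be made from PowerShell on the host if that's more convenient ("Ethernet 1" is a placeholder for the adapter name):
# Disables RSC for both IPv4 and IPv6 on the physical adapter
Disable-NetAdapterRsc -Name "Ethernet 1"
# Confirm both are now disabled
Get-NetAdapterRsc -Name "Ethernet 1"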
After this, I also had to create a transport filter with New-NetTransportFilter (PowerShell) to map my destination prefix to the Datacenter profile. This is more due to the latency on the high-bandwidth circuit.
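To make that concrete, the filter looked roughly like this (198.51.100.0/24 stands in for the remote site's prefix):
# Apply the Datacenter TCP setting template to traffic headed for the remote prefix
New-NetTransportFilter -SettingName Datacenter -DestinationPrefix "198.51.100.0/24"
# Inspect the TCP parameters that the Datacenter template applies
Get-NetTCPSetting -SettingName Datacenter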
I hope this helps someone else out!