author      Eric Dumazet <edumazet@google.com>      2015-07-31 18:25:17 +0200
committer   David S. Miller <davem@davemloft.net>   2015-08-03 14:22:53 -0700
commit      0fbd050a7d262b74527a289ae75a33626d1060a8 (patch)
tree        ea6dd1fd564cdf64a9f4667e7be64c5cfbbbcc72 /drivers
parent      95b9be64d1683c4552ab94d94c08bf833561b1c4 (diff)
virtio_net: add gro capability
Straightforward patch to add GRO processing to virtio_net.
Use of napi_complete_done() allows more aggressive aggregation,
which is opted into by setting /sys/class/net/xxx/gro_flush_timeout.
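
For context, a minimal sketch of the mechanism behind the opt-in, paraphrased
from the core NAPI code of that era and not part of this patch: when a driver
completes its poll via napi_complete_done() and the device has a non-zero
gro_flush_timeout, held GRO packets are not flushed immediately; an hrtimer is
armed instead, so frames arriving shortly afterwards can still be merged. The
fields and helpers used below (gro_list, n->timer, napi_gro_flush()) belong to
the kernel's NAPI implementation and are shown purely as an illustration.

/* Illustrative sketch only -- simplified from the net core NAPI code,
 * not code added by this patch.
 */
static void napi_complete_done_sketch(struct napi_struct *n, int work_done)
{
	if (n->gro_list) {
		unsigned long timeout = 0;

		if (work_done)
			timeout = n->dev->gro_flush_timeout;

		if (timeout)
			/* Defer the flush: frames arriving within the
			 * timeout can still be merged into held packets.
			 */
			hrtimer_start(&n->timer, ns_to_ktime(timeout),
				      HRTIMER_MODE_REL_PINNED);
		else
			/* Default (timeout == 0): flush GRO packets now. */
			napi_gro_flush(n, false);
	}
	/* ... then clear NAPI_STATE_SCHED / leave the poll list ... */
}
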
Tested:
With /sys/class/net/xxx/gro_flush_timeout set to 1000 nsec,
Rick Jones reported the following results.
One VM of each kernel on a pair of OpenStack compute nodes with E5-2650Lv3
CPUs and Intel 82599ES-based NICs, i.e. two "before" and two "after" VMs.
The compute nodes were running OpenStack Kilo, with VxLAN encapsulation
used through OVS, so no GRO was coming up the host stack. The compute
nodes themselves ran a 3.14-based kernel.
Single-stream netperf; CPU utilizations, and thus service demands, are
based on intra-guest reported CPU.
Throughput Mbit/s, bigger is better:

                     Min    Median  Average  Max
4.2.0-rc3+           1364   1686    1678     1938
4.2.0-rc3+flush1k    1824   2269    2275     2647

Send Service Demand, smaller is better:

                     Min    Median  Average  Max
4.2.0-rc3+           0.236  0.558   0.524    0.802
4.2.0-rc3+flush1k    0.176  0.503   0.471    0.738

Receive Service Demand, smaller is better:

                     Min    Median  Average  Max
4.2.0-rc3+           1.906  2.188   2.191    2.531
4.2.0-rc3+flush1k    0.448  0.529   0.533    0.692
Signed-off-by: Eric Dumazet <edumazet@google.com>
Tested-by: Rick Jones <rick.jones2@hp.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'drivers')
-rw-r--r--   drivers/net/virtio_net.c   4
1 file changed, 2 insertions, 2 deletions
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 7fbca37a1adf..66f08f622dc6 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -518,7 +518,7 @@ static void receive_buf(struct virtnet_info *vi, struct receive_queue *rq,
 
 	skb_mark_napi_id(skb, &rq->napi);
 
-	netif_receive_skb(skb);
+	napi_gro_receive(&rq->napi, skb);
 	return;
 
 frame_err:
@@ -756,7 +756,7 @@ static int virtnet_poll(struct napi_struct *napi, int budget)
 	/* Out of packets? */
 	if (received < budget) {
 		r = virtqueue_enable_cb_prepare(rq->vq);
-		napi_complete(napi);
+		napi_complete_done(napi, received);
 		if (unlikely(virtqueue_poll(rq->vq, r)) &&
 		    napi_schedule_prep(napi)) {
 			virtqueue_disable_cb(rq->vq);
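
As a usage illustration (not part of the commit): the opt-in is a plain sysfs
write, as in the test above. A minimal userspace C sketch follows; the
interface name "eth0" and the 1000 nsec value are assumptions mirroring the
test setup.

/* Opt a NIC into deferred GRO flushing by writing a nanosecond value to
 * /sys/class/net/<dev>/gro_flush_timeout (requires root). The "eth0"
 * name and the 1000 ns value are examples only.
 */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	const char *path = "/sys/class/net/eth0/gro_flush_timeout";
	FILE *f = fopen(path, "w");

	if (!f) {
		perror("fopen");
		return EXIT_FAILURE;
	}
	fprintf(f, "1000\n");	/* nanoseconds to wait before flushing GRO */
	if (fclose(f) != 0) {
		perror("fclose");
		return EXIT_FAILURE;
	}
	return EXIT_SUCCESS;
}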