author     David Laight <David.Laight@ACULAB.COM>          2013-11-11 12:26:54 +0000
committer  Sarah Sharp <sarah.a.sharp@linux.intel.com>     2013-12-02 11:57:10 -0800
commit     35773dac5f862cb1c82ea151eba3e2f6de51ec3e (patch)
tree       ee95416c3efb1f75ab4bd43ed2560a23c97739d3 /drivers/usb
parent     c24cb6c8b501ebdf1aacec7960110a9741a45ced (diff)
usb: xhci: Link TRB must not occur within a USB payload burst
Section 4.11.7.1 of rev 1.0 of the xhci specification states that a link TRB
can only occur at a boundary between underlying USB frames (512 bytes for
high speed devices).
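As a rough, self-contained illustration of that rule (not code from this patch; the 512-byte figure is simply taken from the text above, and the helper name is made up), an enqueue path could check whether the payload queued so far ends on a burst boundary before it is allowed to place a link TRB:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical helper, not from the xHCI driver: per section 4.11.7.1 a
 * link TRB may only sit on a boundary between data bursts, taken here as
 * 512 bytes for a high speed endpoint as described above. */
#define HS_BURST_BYTES 512u

static bool link_trb_allowed(uint64_t td_bytes_queued_so_far)
{
        /* A link TRB is legal only when the payload enqueued so far for
         * the current TD ends exactly on a burst boundary. */
        return (td_bytes_queued_so_far % HS_BURST_BYTES) == 0;
}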
If this isn't done, the USB frames aren't formatted correctly and, for example,
the USB3 ethernet ax88179_178a card will stop sending (while still receiving)
when running a netperf tcp transmit test with (say) an 8k buffer.
This should be a candidate for stable; the ax88179_178a driver defaults to
gso and tso enabled, so it passes a lot of fragmented skbs to the USB stack.
Notes from Sarah:
Discussion: http://marc.info/?l=linux-usb&m=138384509604981&w=2
This patch fixes a long-standing xHCI driver bug that was revealed by a
change in 3.12 in the usb-net driver. Commit
638c5115a794981441246fa8fa5d95c1875af5ba "USBNET: support DMA SG" added
support to use bulk endpoint scatter-gather (urb->sg). Only the USB
ethernet drivers trigger this bug, because the mass storage driver sends
sg list entries in page-sized chunks.
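The following sketch is loosely modelled on the usbnet DMA SG support that commit 638c5115a794 added; the function name is hypothetical and the details are simplified. The point is that a TSO/GSO skb becomes an urb->sg list whose entries have arbitrary fragment sizes, unlike usb-storage, which submits page-sized chunks:

#include <linux/scatterlist.h>
#include <linux/skbuff.h>
#include <linux/slab.h>
#include <linux/usb.h>

static int example_build_dma_sg(struct sk_buff *skb, struct urb *urb)
{
        int num_sgs = skb_shinfo(skb)->nr_frags + 1;

        urb->sg = kmalloc_array(num_sgs, sizeof(struct scatterlist),
                                GFP_ATOMIC);
        if (!urb->sg)
                return -ENOMEM;

        sg_init_table(urb->sg, num_sgs);
        /* One entry for the linear data, one per page fragment -- these
         * fragment sizes need not align to the 512-byte burst boundary. */
        urb->num_sgs = skb_to_sgvec(skb, urb->sg, 0, skb->len);
        urb->transfer_buffer_length = skb->len;

        return 0;
}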
This patch only fixes the issue for bulk endpoint scatter-gather. The
problem will still occur for periodic endpoints, because hosts will
interpret no-op transfers as a request to skip a service interval, which
is not what we want.
Luckily, the USB core isn't set up for scatter-gather on isochronous
endpoints, and no USB drivers use scatter-gather for interrupt
endpoints. Document this known limitation so that developers won't try
to use urb->sg for interrupt endpoints until this issue is fixed. The
more comprehensive fix would be to allow link TRBs in the middle of the
endpoint ring and revert this patch, but that fix would touch too much
code to be allowed in for stable.
This patch should be backported to kernels as old as 3.12, that contain
the commit 638c5115a794981441246fa8fa5d95c1875af5ba "USBNET: support DMA
SG". Without this patch, the USB network device gets wedged, and stops
sending packets. Mark Lord confirms this patch fixes the regression:
http://marc.info/?l=linux-netdev&m=138487107625966&w=2
Signed-off-by: David Laight <david.laight@aculab.com>
Signed-off-by: Sarah Sharp <sarah.a.sharp@linux.intel.com>
Tested-by: Mark Lord <mlord@pobox.com>
Cc: stable@vger.kernel.org
Diffstat (limited to 'drivers/usb')
-rw-r--r--   drivers/usb/host/xhci-ring.c   54
1 file changed, 52 insertions, 2 deletions
diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index 1e2f3f495843..53c2e296467f 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -2973,8 +2973,58 @@ static int prepare_ring(struct xhci_hcd *xhci, struct xhci_ring *ep_ring,
 	}
 
 	while (1) {
-		if (room_on_ring(xhci, ep_ring, num_trbs))
-			break;
+		if (room_on_ring(xhci, ep_ring, num_trbs)) {
+			union xhci_trb *trb = ep_ring->enqueue;
+			unsigned int usable = ep_ring->enq_seg->trbs +
+					TRBS_PER_SEGMENT - 1 - trb;
+			u32 nop_cmd;
+
+			/*
+			 * Section 4.11.7.1 TD Fragments states that a link
+			 * TRB must only occur at the boundary between
+			 * data bursts (eg 512 bytes for 480M).
+			 * While it is possible to split a large fragment
+			 * we don't know the size yet.
+			 * Simplest solution is to fill the trb before the
+			 * LINK with nop commands.
+			 */
+			if (num_trbs == 1 || num_trbs <= usable || usable == 0)
+				break;
+
+			if (ep_ring->type != TYPE_BULK)
+				/*
+				 * While isoc transfers might have a buffer that
+				 * crosses a 64k boundary it is unlikely.
+				 * Since we can't add NOPs without generating
+				 * gaps in the traffic just hope it never
+				 * happens at the end of the ring.
+				 * This could be fixed by writing a LINK TRB
+				 * instead of the first NOP - however the
+				 * TRB_TYPE_LINK_LE32() calls would all need
+				 * changing to check the ring length.
+				 */
+				break;
+
+			if (num_trbs >= TRBS_PER_SEGMENT) {
+				xhci_err(xhci, "Too many fragments %d, max %d\n",
+						num_trbs, TRBS_PER_SEGMENT - 1);
+				return -ENOMEM;
+			}
+
+			nop_cmd = cpu_to_le32(TRB_TYPE(TRB_TR_NOOP) |
+					ep_ring->cycle_state);
+			ep_ring->num_trbs_free -= usable;
+			do {
+				trb->generic.field[0] = 0;
+				trb->generic.field[1] = 0;
+				trb->generic.field[2] = 0;
+				trb->generic.field[3] = nop_cmd;
+				trb++;
+			} while (--usable);
+			ep_ring->enqueue = trb;
+			if (room_on_ring(xhci, ep_ring, num_trbs))
+				break;
+		}
 
 		if (ep_ring == xhci->cmd_ring) {
 			xhci_err(xhci, "Do not support expand command ring\n");
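For readers who want to try the padding decision in isolation, here is a minimal userspace sketch of the arithmetic used in prepare_ring() above; plain integers stand in for the xHCI ring structures, so it is illustrative only (TRBS_PER_SEGMENT is 64 in the driver's xhci.h, the example values are arbitrary):

#include <stdio.h>

#define TRBS_PER_SEGMENT 64	/* same value the xHCI driver uses */

/* TRBs still usable before the segment's trailing link TRB, given the
 * index of the next enqueue slot within the segment. */
static unsigned int usable_before_link(unsigned int enqueue_index)
{
        return TRBS_PER_SEGMENT - 1 - enqueue_index;
}

int main(void)
{
        unsigned int enqueue_index = 60;	/* example: near the end of a segment */
        unsigned int num_trbs = 6;		/* example: the TD needs 6 TRBs */
        unsigned int usable = usable_before_link(enqueue_index);

        /* Mirrors the patch: in these cases no padding is needed and the
         * TD is enqueued at the current position. */
        if (num_trbs == 1 || num_trbs <= usable || usable == 0)
                printf("TD fits before the link TRB: enqueue it here\n");
        else
                printf("pad %u TRBs with NOPs so the TD starts after the link TRB\n",
                       usable);
        return 0;
}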