author    | Mike Marciniszyn <mike.marciniszyn@intel.com> | 2017-03-20 17:26:01 -0700
committer | Doug Ledford <dledford@redhat.com>            | 2017-04-05 14:45:09 -0400
commit    | 44dcfa4b182dffed0e1e6535220fcdd12620b9ec (patch)
tree      | a1aa114b8f797893e28c4fd5d6b4f872599fc571 /include/rdma/rdmavt_qp.h
parent    | 5f14e4e6674002f377ba164114eb9a67ba23d864 (diff)
IB/rdmavt: Avoid resetting wqe send_flags in unreserve
The wqe should be read-only here; the superfluous reset of the
RVT_SEND_RESERVE_USED flag causes reserved operations to elicit a bad
completion to the ULP.
Maintenance of the flag now lives entirely in rvt_post_one_wr(): a
reserved operation sets the flag, and a non-reserved operation ensures
the flag is cleared on the wqe about to be posted.
Fixes: 856cc4c237ad ("IB/hfi1: Add the capability for reserved operations")
Reviewed-by: Don Hiatt <don.hiatt@intel.com>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Diffstat (limited to 'include/rdma/rdmavt_qp.h')
-rw-r--r-- | include/rdma/rdmavt_qp.h | 4
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/include/rdma/rdmavt_qp.h b/include/rdma/rdmavt_qp.h
index 3cdd9e26e96e..e3bb312a2ffa 100644
--- a/include/rdma/rdmavt_qp.h
+++ b/include/rdma/rdmavt_qp.h
@@ -2,7 +2,7 @@
 #define DEF_RDMAVT_INCQP_H
 
 /*
- * Copyright(c) 2016 Intel Corporation.
+ * Copyright(c) 2016, 2017 Intel Corporation.
  *
  * This file is provided under a dual BSD/GPLv2 license.  When using or
  * redistributing this file, you may do so under either license.
@@ -526,7 +526,6 @@ static inline void rvt_qp_wqe_reserve(
 	struct rvt_qp *qp,
 	struct rvt_swqe *wqe)
 {
-	wqe->wr.send_flags |= RVT_SEND_RESERVE_USED;
 	atomic_inc(&qp->s_reserved_used);
 }
 
@@ -550,7 +549,6 @@ static inline void rvt_qp_wqe_unreserve(
 	struct rvt_swqe *wqe)
 {
 	if (unlikely(wqe->wr.send_flags & RVT_SEND_RESERVE_USED)) {
-		wqe->wr.send_flags &= ~RVT_SEND_RESERVE_USED;
 		atomic_dec(&qp->s_reserved_used);
 		/* insure no compiler re-order up to s_last change */
 		smp_mb__after_atomic();
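For context, here is a minimal userspace sketch of the invariant the commit message describes: the poster (rvt_post_one_wr() in rdmavt) owns the RVT_SEND_RESERVE_USED flag, while unreserve only reads the flag and drops the reserved count. The model_* types and helpers and the placeholder flag value below are illustrative stand-ins, not the rdmavt code; the rdmavt sources remain the authority on the actual caller-side change.

/*
 * Simplified userspace model of the flag handling described above.
 * model_qp/model_wqe and the helpers are stand-ins for rvt_qp/rvt_swqe,
 * rvt_post_one_wr() and rvt_qp_wqe_unreserve(); the flag value is a
 * placeholder chosen for this model only.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

#define RVT_SEND_RESERVE_USED 0x1	/* placeholder bit for the model */

struct model_wqe {
	unsigned int send_flags;
};

struct model_qp {
	int s_reserved_used;
};

/* Post-time maintenance: only the poster sets or clears the flag. */
static void model_post_one_wr(struct model_qp *qp, struct model_wqe *wqe,
			      bool reserved_op)
{
	if (reserved_op) {
		wqe->send_flags |= RVT_SEND_RESERVE_USED;
		qp->s_reserved_used++;
	} else {
		/* reused slot: make sure a stale flag does not leak */
		wqe->send_flags &= ~RVT_SEND_RESERVE_USED;
	}
}

/* Unreserve: read the flag, drop the count, never rewrite the wqe. */
static void model_wqe_unreserve(struct model_qp *qp,
				const struct model_wqe *wqe)
{
	if (wqe->send_flags & RVT_SEND_RESERVE_USED)
		qp->s_reserved_used--;
}

int main(void)
{
	struct model_qp qp = { 0 };
	struct model_wqe wqe = { 0 };

	model_post_one_wr(&qp, &wqe, true);	/* reserved operation */
	assert(wqe.send_flags & RVT_SEND_RESERVE_USED);
	assert(qp.s_reserved_used == 1);

	model_wqe_unreserve(&qp, &wqe);
	/* the flag survives unreserve; only the counter changes */
	assert(wqe.send_flags & RVT_SEND_RESERVE_USED);
	assert(qp.s_reserved_used == 0);

	model_post_one_wr(&qp, &wqe, false);	/* slot reused, flag cleared */
	assert(!(wqe.send_flags & RVT_SEND_RESERVE_USED));

	printf("model ok\n");
	return 0;
}

In this model the second group of asserts would fail if unreserve cleared the flag, which mirrors the behavior the patch removes from rvt_qp_wqe_reserve()/rvt_qp_wqe_unreserve().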