author      James Smart <jsmart2021@gmail.com>                2019-01-28 11:14:28 -0800
committer   Martin K. Petersen <martin.petersen@oracle.com>   2019-02-05 22:29:09 -0500
commit      c490850a094794e7515737a6939146966c826577 (patch)
tree        7501cd7a6d7150695d0dd1914a8c72ce843de194 /drivers/scsi/lpfc/lpfc_nvme.h
parent      ace44e48b1fc36055a8ff05a778110e726387032 (diff)
scsi: lpfc: Adapt partitioned XRI lists to efficient sharing
The XRI get/put lists were partitioned per hardware queue. However, the
adapter rarely had sufficient resources to give a large number of XRIs to
each queue. As such, it became common for a cpu to exhaust its XRI
resources and return a BUSY condition, requesting the upper io stack to
retry, even though other cpus were idle and not using their resources.
Create as efficient a scheme as possible to move resources to the cpus that
need them. Each cpu maintains a small private pool which it allocates from
for io. There is a watermark that the cpu attempts to keep in the private
pool. When the private pool is empty, it pulls from the cpu's global
pool. When the cpu's global pool is empty, it pulls from other cpus'
global pools. As there are many cpu global pools (one per cpu or hardware
queue) and as each cpu selects which pool to pull from at different rates
and at different times, a randomizing effect is created that minimizes the
number of cpus that contend with each other when they steal XRIs from
another cpu's global pool.
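
To make the three-tier allocation order concrete, here is a minimal
user-space C sketch. Every name in it (struct cpu_pools, xri_get, NR_CPUS,
PVT_WATERMARK) is invented for illustration, not taken from the driver,
and locking is omitted; the real lpfc code protects each pool with a
spinlock and keeps actual XRI objects on lists rather than bare counters.

#define NR_CPUS        4    /* illustrative; one pool pair per cpu/hdw queue */
#define PVT_WATERMARK  8    /* depth each cpu tries to keep privately */

struct xri_pool {
        int count;                  /* free XRIs currently in this pool */
};

struct cpu_pools {
        struct xri_pool pvt;        /* private pool, used only by its cpu */
        struct xri_pool glob;       /* global pool, stealable by other cpus */
        unsigned int steal_seed;    /* per-cpu PRNG state to pick victims */
};

static struct cpu_pools pools[NR_CPUS];

/* Allocate one XRI for cpu `me`: private pool first, then this cpu's
 * global pool, then steal from another cpu's global pool.  Returns 0 on
 * success, -1 when every pool is empty (the caller would report BUSY to
 * the upper io stack). */
static int xri_get(int me)
{
        struct cpu_pools *p = &pools[me];
        int i;

        if (p->pvt.count > 0) {             /* fast path: private pool */
                p->pvt.count--;
                return 0;
        }
        if (p->glob.count > 0) {            /* refill path: own global pool */
                p->glob.count--;
                return 0;
        }
        /* Start the victim scan at a pseudo-random cpu so concurrent
         * stealers scatter across pools instead of piling onto one. */
        p->steal_seed = p->steal_seed * 1103515245u + 12345u;
        for (i = 0; i < NR_CPUS; i++) {
                int victim = (p->steal_seed + i) % NR_CPUS;

                if (victim != me && pools[victim].glob.count > 0) {
                        pools[victim].glob.count--;
                        return 0;
                }
        }
        return -1;                          /* nothing anywhere: BUSY */
}

The per-cpu seed is what produces the randomizing effect the message
describes: two cpus that both run dry will usually raid different victims.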
On io completion, a cpu pushes the XRI back onto its private pool. A
watermark level is maintained for the private pool such that, when it is
exceeded, XRIs are moved to the cpu's global pool so that other cpus may
allocate them.
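
Continuing the same illustrative sketch, the completion path is the mirror
image: the XRI stays private below the watermark and spills into the
global pool above it, where other cpus can find or steal it.

/* Completion path: return an XRI to cpu `me`.  Below the watermark it
 * stays private (cheap to reallocate); above it, it overflows into the
 * global pool and becomes visible to other cpus. */
static void xri_put(int me)
{
        struct cpu_pools *p = &pools[me];

        if (p->pvt.count < PVT_WATERMARK)
                p->pvt.count++;
        else
                p->glob.count++;
}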
On NVME, as heartbeat commands are critical to get placed on the wire, a
single expedite pool is maintained. When a heartbeat is to be sent, it
allocates an XRI from the expedite pool rather than the normal cpu
private/global pools. On any io completion, if a reduction in the expedite
pool is seen, it is replenished before the XRI is placed on the cpu
private pool.
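
A sketch of how such an expedite pool could sit in front of the normal
pools, again with invented names (EXPEDITE_TARGET, xri_get_expedite,
xri_put_any) and no claim to match the driver's actual sizing or locking:

#define EXPEDITE_TARGET 2   /* XRIs held back for heartbeat-class commands */

static int expedite_count = EXPEDITE_TARGET;

/* Heartbeats bypass the private/global pools entirely, so a fully
 * drained data path cannot starve them. */
static int xri_get_expedite(void)
{
        if (expedite_count > 0) {
                expedite_count--;
                return 0;
        }
        return -1;
}

/* Any completion refills the expedite pool first; only then does the
 * XRI go back to the completing cpu's private pool. */
static void xri_put_any(int me)
{
        if (expedite_count < EXPEDITE_TARGET)
                expedite_count++;
        else
                xri_put(me);
}

int main(void)
{
        pools[0].glob.count = 16;   /* seed cpu 0's global pool */
        if (xri_get(2) == 0)        /* cpu 2 runs dry and steals from cpu 0 */
                xri_put_any(2);     /* completion tops up expedite, then pvt */
        return 0;
}

Compiled together, the three fragments form a complete toy program; the
main() merely exercises a steal followed by a completion.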
Statistics are added to aid in understanding the XRI levels on each cpu
and their behavior.
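
Purely as an illustration of what such per-cpu statistics might track (all
names hypothetical, not taken from the patch):

/* Hypothetical per-cpu counters, bumped at each decision point in the
 * xri_get()/xri_put() sketches above. */
static struct xri_stats {
        unsigned long pvt_hits;    /* served from the private pool */
        unsigned long glob_hits;   /* served from this cpu's global pool */
        unsigned long steals;      /* served from another cpu's global pool */
        unsigned long busies;      /* no XRI found; BUSY returned upstream */
        unsigned long spills;      /* puts that overflowed the watermark */
} xri_stats[NR_CPUS];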
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Diffstat (limited to 'drivers/scsi/lpfc/lpfc_nvme.h')
-rw-r--r--   drivers/scsi/lpfc/lpfc_nvme.h   45
1 file changed, 1 insertion(+), 44 deletions(-)
diff --git a/drivers/scsi/lpfc/lpfc_nvme.h b/drivers/scsi/lpfc/lpfc_nvme.h
index 13f900dd4eeb..07d5fa16dd39 100644
--- a/drivers/scsi/lpfc/lpfc_nvme.h
+++ b/drivers/scsi/lpfc/lpfc_nvme.h
@@ -71,49 +71,6 @@ struct lpfc_nvme_rport {
 	struct completion rport_unreg_done;
 };
 
-struct lpfc_nvme_buf {
-	/* Common fields */
-	struct list_head list;
-	void *data;
-	dma_addr_t dma_handle;
-	dma_addr_t dma_phys_sgl;
-	struct sli4_sge *dma_sgl;
-	struct lpfc_iocbq cur_iocbq;
-	struct lpfc_sli4_hdw_queue *hdwq;
-	uint16_t hdwq_no;
-	uint16_t cpu;
-
-	/* NVME specific fields */
-	struct nvmefc_fcp_req *nvmeCmd;
-	struct lpfc_nodelist *ndlp;
-
-	uint32_t timeout;
-
-	uint16_t flags;	/* TBD convert exch_busy to flags */
-#define LPFC_SBUF_XBUSY		0x1	/* SLI4 hba reported XB on WCQE cmpl */
-#define LPFC_BUMP_QDEPTH	0x2	/* bumped queue depth counter */
-	uint16_t exch_busy;	/* SLI4 hba reported XB on complete WCQE */
-	uint16_t status;	/* From IOCB Word 7- ulpStatus */
-	uint32_t result;	/* From IOCB Word 4. */
-
-	uint32_t seg_cnt;	/* Number of scatter-gather segments returned by
-				 * dma_map_sg.  The driver needs this for calls
-				 * to dma_unmap_sg.
-				 */
-	wait_queue_head_t *waitq;
-	unsigned long start_time;
-
-	uint16_t qidx;
-
-#ifdef CONFIG_SCSI_LPFC_DEBUG_FS
-	uint64_t ts_cmd_start;
-	uint64_t ts_last_cmd;
-	uint64_t ts_cmd_wqput;
-	uint64_t ts_isr_cmpl;
-	uint64_t ts_data_nvme;
-#endif
-};
-
 struct lpfc_nvme_fcpreq_priv {
-	struct lpfc_nvme_buf *nvme_buf;
+	struct lpfc_io_buf *nvme_buf;
 };