From:    Zhen Lei <thunder.leizhen@huawei.com>
Subject: [PATCH v3 2/2] iommu/arm-smmu-v3: avoid redundant CMD_SYNCs if possible
Date:    2018-08-15
Two or more CMD_SYNCs may end up adjacent in the command queue, and the first
one already does everything the later ones are meant to do. Dropping the
redundant CMD_SYNCs improves I/O performance, especially under heavy load.
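To make the coalescing idea easier to follow in isolation, here is a minimal
user-space sketch. The names struct cmdq_state, insert_cmd_sync() and
issue_sync() are made up for this illustration and merely stand in for the
driver's struct arm_smmu_device state, arm_smmu_cmdq_insert_cmd() and
__arm_smmu_cmdq_issue_sync_msi(); only the CMDQ_OP_CMD_SYNC opcode value
matches the real driver.

#include <stdint.h>
#include <stdio.h>

#define CMDQ_OP_CMD_SYNC	0x46	/* same opcode value as in the driver */

/* Hypothetical stand-in for the relevant bits of struct arm_smmu_device. */
struct cmdq_state {
	uint32_t sync_seq;	/* sequence number of the last CMD_SYNC queued */
	uint8_t  prev_opcode;	/* opcode of the last command inserted */
};

/* Hypothetical stand-in for queueing a CMD_SYNC onto the command queue. */
static void insert_cmd_sync(struct cmdq_state *q, uint32_t seq)
{
	printf("queueing CMD_SYNC, msidata=%u\n", seq);
	q->prev_opcode = CMDQ_OP_CMD_SYNC;
}

/*
 * Issue a sync: if the command inserted most recently was already a
 * CMD_SYNC, reuse its sequence number instead of queueing another one.
 * The caller then polls until that sequence number has completed.
 */
static uint32_t issue_sync(struct cmdq_state *q)
{
	if (q->prev_opcode == CMDQ_OP_CMD_SYNC)
		return q->sync_seq;

	insert_cmd_sync(q, ++q->sync_seq);
	return q->sync_seq;
}

int main(void)
{
	struct cmdq_state q = { 0 };

	printf("first sync waits on %u\n", issue_sync(&q));	/* queues a sync */
	printf("second sync waits on %u\n", issue_sync(&q));	/* coalesced */
	return 0;
}

In the patch itself this check runs with smmu->cmdq.lock held, so
prev_cmd_opcode and sync_nr cannot change between the test and the reuse of
the sequence number.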

I gathered statistics in my test environment; the number of CMD_SYNCs was
reduced by about one third (19542181 / 58098548 ≈ 34%). See below:
CMD_SYNCs reduced:  19542181
CMD_SYNCs total:    58098548 (including the reduced ones)
CMDs total:        116197099 (TLBI:SYNC ratio roughly 1:1)

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
---
 drivers/iommu/arm-smmu-v3.c | 22 +++++++++++++++++++---
 1 file changed, 19 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index 3f5c236..ee0219b 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -567,6 +567,7 @@ struct arm_smmu_device {
 	int				gerr_irq;
 	int				combined_irq;
 	u32				sync_nr;
+	u8				prev_cmd_opcode;
 
 	unsigned long			ias; /* IPA */
 	unsigned long			oas; /* PA */
@@ -780,6 +781,11 @@ static inline void arm_smmu_cmdq_sync_set_msidata(u64 *cmd, u32 msidata)
 	cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIDATA, msidata);
 }
 
+static inline u8 arm_smmu_cmd_opcode_get(u64 *cmd)
+{
+	return cmd[0] & CMDQ_0_OP;
+}
+
 /* High-level queue accessors */
 static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
 {
@@ -904,6 +910,8 @@ static void arm_smmu_cmdq_insert_cmd(struct arm_smmu_device *smmu, u64 *cmd)
 	struct arm_smmu_queue *q = &smmu->cmdq.q;
 	bool wfe = !!(smmu->features & ARM_SMMU_FEAT_SEV);
 
+	smmu->prev_cmd_opcode = arm_smmu_cmd_opcode_get(cmd);
+
 	while (queue_insert_raw(q, cmd) == -ENOSPC) {
 		if (queue_poll_cons(q, false, wfe))
 			dev_err_ratelimited(smmu->dev, "CMDQ timeout\n");
@@ -958,9 +966,17 @@ static int __arm_smmu_cmdq_issue_sync_msi(struct arm_smmu_device *smmu)
 	arm_smmu_cmdq_build_cmd(cmd, &ent);
 
 	spin_lock_irqsave(&smmu->cmdq.lock, flags);
-	ent.sync.msidata = ++smmu->sync_nr;
-	arm_smmu_cmdq_sync_set_msidata(cmd, ent.sync.msidata);
-	arm_smmu_cmdq_insert_cmd(smmu, cmd);
+	if (smmu->prev_cmd_opcode == CMDQ_OP_CMD_SYNC) {
+		/*
+		 * The previous command is also a CMD_SYNC, there is no need
+		 * to add another one; just poll for the existing one.
+		 */
+		ent.sync.msidata = smmu->sync_nr;
+	} else {
+		ent.sync.msidata = ++smmu->sync_nr;
+		arm_smmu_cmdq_sync_set_msidata(cmd, ent.sync.msidata);
+		arm_smmu_cmdq_insert_cmd(smmu, cmd);
+	}
 	spin_unlock_irqrestore(&smmu->cmdq.lock, flags);
 
 	return __arm_smmu_sync_poll_msi(smmu, ent.sync.msidata);
--
1.8.3
