Date: Tue, 16 Aug 2022 11:35:11 +0100
From: John Garry <>
Subject: Re: [ata] 0568e61225: stress-ng.copy-file.ops_per_sec -15.0% regression
On 16/08/2022 07:57, Oliver Sang wrote:
>>> For me, a complete kernel log may help.
>> and since only 1HDD, the output of the following would be helpful:
>>
>> /sys/block/sda/queue/max_sectors_kb
>> /sys/block/sda/queue/max_hw_sectors_kb
>>
>> And for 5.19, if possible.
> for commit
> 0568e61225 ("ata: libata-scsi: cap ata_device->max_sectors according to shost->max_sectors")
>
> root@lkp-icl-2sp1 ~# cat /sys/block/sda/queue/max_sectors_kb
> 512
> root@lkp-icl-2sp1 ~# cat /sys/block/sda/queue/max_hw_sectors_kb
> 512
>
> for both commit
> 4cbfca5f77 ("scsi: scsi_transport_sas: cap shost opt_sectors according to DMA optimal limit")
> and v5.19
>
> root@lkp-icl-2sp1 ~# cat /sys/block/sda/queue/max_sectors_kb
> 1280
> root@lkp-icl-2sp1 ~# cat /sys/block/sda/queue/max_hw_sectors_kb
> 32767
>
Thanks, I appreciate this.
From the dmesg, I see 2x SATA disks - I was under the impression that the system had only 1x.
Anyway, both drives show LBA48 support, which explains the large max hw sectors value of 32767KB:

[   31.129629][ T1146] ata6.00: 1562824368 sectors, multi 1: LBA48 NCQ (depth 32)
So this is what I suspected: we are being capped by the default shost max_sectors (1024 sectors).
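To spell out where the cap comes from: 0568e61225 has ata_scsi_dev_config() clamp the device limit to the shost limit - from memory the hunk is roughly this (not a verbatim quote of the tree):

	dev->max_sectors = min(dev->max_sectors, sdev->host->max_sectors);

With shost->max_sectors left at SCSI_DEFAULT_MAX_SECTORS (1024), that gives 1024 * 512B = 512KB, which matches the max_sectors_kb reported above.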
This seems like the simplest fix for you:
--- a/include/linux/libata.h
+++ b/include/linux/libata.h
@@ -1382,7 +1382,8 @@ extern const struct attribute_group *ata_common_sdev_groups[];
 	.proc_name		= drv_name,				\
 	.slave_destroy		= ata_scsi_slave_destroy,		\
 	.bios_param		= ata_std_bios_param,			\
-	.unlock_native_capacity	= ata_scsi_unlock_native_capacity
+	.unlock_native_capacity	= ata_scsi_unlock_native_capacity,\
+	.max_sectors		= ATA_MAX_SECTORS_LBA48
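Assuming I remember the value right, ATA_MAX_SECTORS_LBA48 is 65535 in include/linux/libata.h:

	/* include/linux/libata.h (from memory) */
	ATA_MAX_SECTORS_LBA48	= 65535,

so the numbers line up: 65535 * 512B / 1024 = 32767KB, i.e. the max_hw_sectors_kb you saw for v5.19, while the capped 1024 sectors gives the 512KB seen after 0568e61225.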
A concern is that other drivers which use libata may have similar issues, as they use the default SCSI_DEFAULT_MAX_SECTORS for max_sectors: hisi_sas, pm8001, aic94xx, mvsas, isci.
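Whether a driver sets .max_sectors explicitly or just leaves it zero should not matter here, since scsi_host_alloc() falls back to the default when the template does not set it - roughly this, going from memory of drivers/scsi/hosts.c:

	if (sht->max_sectors)
		shost->max_sectors = sht->max_sectors;
	else
		shost->max_sectors = SCSI_DEFAULT_MAX_SECTORS;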
So they may be needlessly hobbled for some SATA disks. However, I have a system with a hisi_sas controller and an attached LBA48 disk, and performance for v5.19 vs v6.0 was about the same for fio rw=read @ ~120K IOPS. I can test this further.

Thanks,
John