Date: Fri, 10 Sep 1999 01:17:25 +0200 (CEST)
From: Andrea Arcangeli <>
Subject: Re: [patch] longstanding chksum patch
On Fri, 10 Sep 1999, Artur Skawina wrote:
>"minor performance improvement" is apparently a relative term - >IIRC your patch slows the checksum by 44% for the perfectly legal >(even if uncommon) case of 2-byte aligned buffer...
That's not the result I got from the test I did. Trusting you, I guess the benchmark I was using is completely broken (I was using a hacked checksum helper from Dave). My numbers are posted on l-k as well.
Anyway, right now I am not concerned about the performance change at all (I give higher priority to bugs). Since I remember somebody agreed with enlarging the unrolled loop, and I had positive numbers, I posted it again and didn't expect to get complaints about it.
So the _new_ patch below makes no performance changes at all; it only fixes the buffer overflow. Please complain about this patch instead.
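To spell out the overflow for anyone not reading the asm: the old tail code loads a whole 32-bit word at the position of the last 1-3 bytes and masks off the excess, so the load itself touches up to 3 bytes past the end of the buffer; it was only safe as long as %esi was forced to 4-byte alignment there (hence the removed "esi is 4-aligned so should be ok" comment), which kept the wide load inside a mapped page. The new tail code reads a 2-byte and/or a 1-byte piece and never dereferences anything past the end. A rough C sketch of the two approaches (names and carry handling are illustrative only, not a literal translation of checksum.S):

#include <stdint.h>
#include <string.h>

/* Old behaviour (removed by the patch): read the final partial word whole
 * and mask away the bytes beyond 'tail'.  The masked bytes are still read,
 * so this touches up to 3 bytes past p + tail. */
uint32_t tail_masked(const unsigned char *p, int tail, uint32_t sum)
{
	uint32_t word, mask;

	memcpy(&word, p, 4);			/* over-reads when tail < 4 */
	mask = 0xffffffffu >> ((4 - tail) * 8);	/* tail must be 1, 2 or 3 */
	word &= mask;				/* keep the first 'tail' bytes (little endian) */
	sum += word;
	if (sum < word)				/* end-around carry, like adcl $0 */
		sum++;
	return sum;
}

/* New behaviour: add the trailing 2-byte and/or 1-byte piece separately,
 * never dereferencing anything beyond p + tail.  (The asm adds the 16-bit
 * piece into %ax and carries into %eax; adding it into the full 32-bit sum
 * as done here gives the same result once the sum is folded to 16 bits.) */
uint32_t tail_bytewise(const unsigned char *p, int tail, uint32_t sum)
{
	uint32_t part;

	if (tail & 2) {
		part = p[0] | (uint32_t)p[1] << 8;	/* little-endian 16-bit load */
		sum += part;
		if (sum < part)
			sum++;
		p += 2;
	}
	if (tail & 1) {
		part = p[0];
		sum += part;
		if (sum < part)
			sum++;
	}
	return sum;
}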
--- 2.3.17/arch/i386/lib/checksum.S	Tue Jul 13 02:00:55 1999
+++ /tmp/checksum.S	Fri Sep 10 01:05:21 1999
@@ -123,9 +123,6 @@
 	movl 16(%esp),%ecx	# Function arg: int len
 	movl 12(%esp),%esi	# Function arg: const unsigned char *buf
 
-	testl $2, %esi
-	jnz 30f
-10:
 	movl %ecx, %edx
 	movl %ecx, %ebx
 	andl $0x7c, %ebx
@@ -137,24 +134,6 @@
 	testl %esi, %esi
 	jmp *%ebx
 
-	# Handle 2-byte-aligned regions
-20:	addw (%esi), %ax
-	lea 2(%esi), %esi
-	adcl $0, %eax
-	jmp 10b
-
-30:	subl $2, %ecx
-	ja 20b
-	je 32f
-	movzbl (%esi),%ebx	# csumming 1 byte, 2-aligned
-	addl %ebx, %eax
-	adcl $0, %eax
-	jmp 80f
-32:
-	addw (%esi), %ax	# csumming 2 bytes, 2-aligned
-	adcl $0, %eax
-	jmp 80f
-
 40:
 	addl -128(%esi), %eax
 	adcl -124(%esi), %eax
@@ -193,18 +172,20 @@
 	adcl $0, %eax
 	dec %ecx
 	jge 40b
-	movl %edx, %ecx
-50:	andl $3, %ecx
+50:
+	testl $3, %edx
 	jz 80f
-
-	# Handle the last 1-3 bytes without jumping
-	notl %ecx		# 1->2, 2->1, 3->0, higher bits are masked
-	movl $0xffffff,%ebx	# by the shll and shrl instructions
-	shll $3,%ecx
-	shrl %cl,%ebx
-	andl -128(%esi),%ebx	# esi is 4-aligned so should be ok
-	addl %ebx,%eax
+	testl $2, %edx
+	jz 70f
+	addw -128(%esi), %ax
+	lea 2(%esi), %esi
 	adcl $0,%eax
+70:
+	testl $1, %edx
+	jz 80f
+	movzbl -128(%esi), %ebx
+	addl %ebx, %eax
+	adcl $0, %eax
 80:
 	popl %ebx
 	popl %esi

Andrea