Subject: Re: [kernel-hardening] [PATCH v8 3/3] x86/refcount: Implement fast refcount overflow protection
From: Li Kun
Date: 2017-07-25

Hi Kees,


On 2017/7/25 2:35, Kees Cook wrote:
> +static __always_inline __must_check
> +int __refcount_add_unless(refcount_t *r, int a, int u)
> +{
> +	int c, new;
> +
> +	c = atomic_read(&(r->refs));
> +	do {
> +		if (unlikely(c == u))
> +			break;
> +
> +		asm volatile("addl %2,%0\n\t"
> +			REFCOUNT_CHECK_LT_ZERO
> +			: "=r" (new)
> +			: "0" (c), "ir" (a),
> +			  [counter] "m" (r->refs.counter)
> +			: "cc", "cx");
Here, when the result is LT_ZERO, you will saturate r->refs.counter (in
the exception handler), which makes the first
atomic_try_cmpxchg(&(r->refs), &c, new) bound to fail: the cmpxchg
compares against the stale c while memory already holds the saturated
value, so the loop always takes at least one extra iteration.

Maybe we could just saturate the value of the local variable "new"
instead (see the sketch after the quoted code)?

> +
> +	} while (!atomic_try_cmpxchg(&(r->refs), &c, new));
> +
> +	return c;
> +}
> +
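
To make this concrete, something like the following is roughly what I
mean (just an untested sketch: REFCOUNT_SATURATED is a stand-in for
whatever saturation value the series writes into the counter, and it
assumes the exception handler no longer has to touch r->refs.counter on
this path, so the cmpxchg itself stores the saturated value):

	c = atomic_read(&(r->refs));
	do {
		if (unlikely(c == u))
			break;

		asm volatile("addl %2,%0\n\t"
			REFCOUNT_CHECK_LT_ZERO
			: "=r" (new)
			: "0" (c), "ir" (a),
			  [counter] "m" (r->refs.counter)
			: "cc", "cx");

		/*
		 * The check only fires when the add result went negative,
		 * so a negative "new" here means we overflowed: clamp the
		 * local copy instead of the in-memory counter.
		 */
		if (unlikely(new < 0))
			new = REFCOUNT_SATURATED;

	} while (!atomic_try_cmpxchg(&(r->refs), &c, new));

That way the first cmpxchg can already succeed and leave the counter
saturated, instead of failing against the value the handler wrote behind
our back.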

--
Best Regards
Li Kun
