    Subject: Re: [patch 0/5] lightweight robust futexes: -V1
    On Wed, Feb 15, 2006, Ingo Molnar wrote:
    > "Robustness" is about dealing with crashes while holding a lock: if a
    > process exits prematurely while holding a pthread_mutex_t lock that is
    > also shared with some other process (e.g. yum segfaults while holding a
    > pthread_mutex_t, or yum is kill -9-ed), then waiters for that lock need
    > to be notified that the last owner of the lock exited in some irregular
    > way.
    ...
    > At the heart of this new approach there is a per-thread private list of
    > robust locks that userspace is holding (maintained by glibc) - which
    > userspace list is registered with the kernel via a new syscall [this
    > registration happens at most once per thread lifetime]. At do_exit()
    > time, the kernel checks this user-space list: are there any robust futex
    > locks to be cleaned up?
    ...
    > i've tested the new syscalls on x86 and x86_64, and have made sure the
    > parsing of the userspace list is robust [ ;-) ] even if the list is
    > deliberately corrupted.
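
    If I read the description right, the per-thread registration would look
    roughly like this from userspace (just a sketch to check my understanding;
    I'm assuming struct robust_list_head and the SYS_set_robust_list number
    match the ABI the patch introduces, and glibc would normally do this
    internally rather than the application):

    #define _GNU_SOURCE
    #include <linux/futex.h>     /* struct robust_list_head */
    #include <sys/syscall.h>     /* SYS_set_robust_list */
    #include <unistd.h>
    #include <stdio.h>

    static struct robust_list_head head;

    int main(void)
    {
            /* Empty list: the head points back at itself. */
            head.list.next = &head.list;
            /* Offset from a list entry to the futex word it protects. */
            head.futex_offset = 0;
            head.list_op_pending = NULL;

            /* Register once per thread lifetime; the kernel walks this
             * list at do_exit() time and marks held locks as owner-dead. */
            if (syscall(SYS_set_robust_list, &head, sizeof(head)) != 0) {
                    perror("set_robust_list");
                    return 1;
            }
            return 0;
    }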

    I have no real knowledge of this area, and maybe I misunderstood your
    description, so forgive me if I'm talking garbage.

    Anyway: if a process can trash its robust futex list and then die
    with a segfault, why are the futexes still robust?  In that case the
    kernel has no way to wake up waiters with FUTEX_OWNER_DEAD, or does it?
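
    Just to spell out what I mean by "wake up waiters with FUTEX_OWNER_DEAD"
    (a rough sketch of the lock-word convention as I understand it; the bit
    names are from the new ABI in linux/futex.h, the TID values below are
    made up):

    #include <linux/futex.h>    /* FUTEX_OWNER_DEAD, FUTEX_TID_MASK */
    #include <stdint.h>
    #include <stdio.h>

    /* What a waiter would see in the futex word once the kernel's
     * exit-time cleanup has run for a dead owner. */
    static void inspect(uint32_t word)
    {
            if (word & FUTEX_OWNER_DEAD)
                    printf("owner tid %u died holding the lock\n",
                           word & FUTEX_TID_MASK);
            else
                    printf("owner tid %u still holds the lock\n",
                           word & FUTEX_TID_MASK);
    }

    int main(void)
    {
            inspect(FUTEX_OWNER_DEAD | 1234);   /* cleaned up after a crash */
            inspect(4321);                      /* normal, live owner */
            return 0;
    }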


    Johannes