From: Rik van Riel <riel@redhat.com>
Subject: Re: [PATCH 3/3] x86, fpu: fix math_state_restore() race with kernel_fpu_begin()

On 01/15/2015 02:20 PM, Oleg Nesterov wrote:
> math_state_restore() can race with kernel_fpu_begin(): if an irq comes
> right after __thread_fpu_begin(), __save_init_fpu() will overwrite the
> fpu->state we are going to restore.
>
> Add two simple helpers, kernel_fpu_disable() and kernel_fpu_enable(),
> which simply set/clear in_kernel_fpu, and change math_state_restore()
> to exclude kernel_fpu_begin() in between.
>
> Alternatively we could use local_irq_save/restore, but probably
> these new helpers can have more users.
>
> Perhaps they should disable/enable preemption themselves; in that case
> we could remove the preempt_disable() in __restore_xstate_sig().
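
For reference, a rough sketch of what the two helpers and their use in
math_state_restore() might look like, going only by the description above;
the per-CPU flag name in_kernel_fpu is taken from the patch description,
everything else is illustrative and may not match the actual code:

/*
 * Sketch only, not the actual patch.  The race being closed:
 *
 *   math_state_restore()               irq: kernel_fpu_begin()
 *     __thread_fpu_begin(tsk)
 *                                        __save_init_fpu(tsk)  <- clobbers fpu->state
 *     restore_fpu_checking(tsk)          <- restores stale state
 */
static DEFINE_PER_CPU(bool, in_kernel_fpu);	/* presumably checked on the irq side */

static inline void kernel_fpu_disable(void)
{
	WARN_ON_ONCE(this_cpu_read(in_kernel_fpu));
	this_cpu_write(in_kernel_fpu, true);
}

static inline void kernel_fpu_enable(void)
{
	this_cpu_write(in_kernel_fpu, false);
}

/*
 * math_state_restore() would then bracket the restore with the helpers,
 * which is what "exclude kernel_fpu_begin() in between" refers to
 * (tsk == current, as in math_state_restore()):
 */
	kernel_fpu_disable();
	__thread_fpu_begin(tsk);
	restore_fpu_checking(tsk);
	kernel_fpu_enable();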

Given that math_state_restore() gets an implicit preempt_disable()
through local_irq_disable(), I am not sure whether adding an
explicit preempt_disable() would be good or bad.

It's not like the additional locking rule makes this code any
more complex.
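
For concreteness, the variant Oleg mentions (letting the helpers
disable/enable preemption themselves, so the explicit preempt_disable()
in __restore_xstate_sig() could go away) would presumably look like
this; again only a sketch of the alternative being discussed:

static inline void kernel_fpu_disable(void)
{
	preempt_disable();		/* take over the caller's preempt_disable() */
	this_cpu_write(in_kernel_fpu, true);
}

static inline void kernel_fpu_enable(void)
{
	this_cpu_write(in_kernel_fpu, false);
	preempt_enable();
}

The flag manipulation is unchanged; the only difference is which side
owns disabling preemption.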

> Signed-off-by: Oleg Nesterov <oleg@redhat.com>

Reviewed-by: Rik van Riel <riel@redhat.com>

--
All rights reversed

