From: Chang S. Bae <chang.seok.bae@intel.com>
Subject: [PATCH 07/15] x86/fsgsbase/64: putregs() in a reverse order
Date: Mon, 19 Mar 2018
Make putregs() walk user_regs_struct in reverse order. The main
reason for doing this is to write the FS/GS base after the
corresponding selector.

Today each element is set independently through putreg(). When only
the FS/GS base is updated, the corresponding index is reset to zero.
putregs() now skips that reset when the write covers both the FS/GS
base and the selector: the base is stored directly and the selector
is written as usual.
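
A minimal sketch of the base-only behavior mentioned above, using a
stand-in struct rather than the kernel's thread_struct (the helper
name is hypothetical):

  /* Stand-in for the task's segment state; not the kernel's types. */
  struct thread_sketch {
  	unsigned long  fsbase;
  	unsigned short fsindex;
  };

  /*
   * A base-only write stores the new base and resets the selector
   * (index) to zero, so the stored base is what ends up being used.
   */
  void write_fsbase_sketch(struct thread_sketch *t, unsigned long base)
  {
  	t->fsbase  = base;
  	t->fsindex = 0;
  }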

When FSGSBASE is enabled, an arbitrary base value is possible anyway,
so it is reasonable to write the base last.
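
From the tracer's side, the interesting case is a single regset write
that covers both the selector and the base. A rough user-space example
of such a write on x86-64 (assuming a tracee already stopped under
ptrace; set_fs_and_base() is an illustrative helper, not an existing
API):

  #include <sys/ptrace.h>
  #include <sys/types.h>
  #include <sys/user.h>

  /* Update the FS selector and FS base together via PTRACE_SETREGS. */
  int set_fs_and_base(pid_t pid, unsigned long sel, unsigned long base)
  {
  	struct user_regs_struct regs;

  	if (ptrace(PTRACE_GETREGS, pid, NULL, &regs) == -1)
  		return -1;

  	regs.fs = sel;
  	regs.fs_base = base;

  	return ptrace(PTRACE_SETREGS, pid, NULL, &regs) == -1 ? -1 : 0;
  }

With the reverse walk, such a write stores the base after the selector
in the kernel, even though fs_base sits before fs in user_regs_struct.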

Suggested-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Cc: Markus T. Metzger <markus.t.metzgar@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
---
arch/x86/kernel/ptrace.c | 48 +++++++++++++++++++++++++++++++++++++++++++++---
1 file changed, 45 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/ptrace.c b/arch/x86/kernel/ptrace.c
index 9c09bf0..ee37e28 100644
--- a/arch/x86/kernel/ptrace.c
+++ b/arch/x86/kernel/ptrace.c
@@ -426,14 +426,56 @@ static int putregs(struct task_struct *child,
 			   unsigned int count,
 			   const unsigned long *values)
 {
-	const unsigned long *v = values;
+	const unsigned long *v = values + count / sizeof(unsigned long);
 	int ret = 0;
+#ifdef CONFIG_X86_64
+	bool fs_fully_covered = (offset <= USER_REGS_OFFSET(fs_base)) &&
+				((offset + count) >= USER_REGS_OFFSET(fs));
+	bool gs_fully_covered = (offset <= USER_REGS_OFFSET(gs_base)) &&
+				((offset + count) >= USER_REGS_OFFSET(gs));
+
+	offset += count - sizeof(*v);
+
+	while (count >= sizeof(*v) && !ret) {
+		v--;
+		switch (offset) {
+		case USER_REGS_OFFSET(fs_base):
+			if (fs_fully_covered) {
+				if (unlikely(*v >= TASK_SIZE_MAX))
+					return -EIO;
+				/*
+				 * When changing both %fs and the FS base,
+				 * write_task_fsbase() would clobber the
+				 * task's %fs.  Set only the base here.
+				 */
+				if (child->thread.fsbase != *v)
+					child->thread.fsbase = *v;
+				break;
+			}
+		case USER_REGS_OFFSET(gs_base):
+			if (gs_fully_covered) {
+				if (unlikely(*v >= TASK_SIZE_MAX))
+					return -EIO;
+				/* Same here as the %fs handling above */
+				if (child->thread.gsbase != *v)
+					child->thread.gsbase = *v;
+				break;
+			}
+		default:
+			ret = putreg(child, offset, *v);
+		}
+		count -= sizeof(*v);
+		offset -= sizeof(*v);
+	}
+#else
 
+	offset += count - sizeof(*v);
 	while (count >= sizeof(*v) && !ret) {
-		ret = putreg(child, offset, *v++);
+		ret = putreg(child, offset, *(--v));
 		count -= sizeof(*v);
-		offset += sizeof(*v);
+		offset -= sizeof(*v);
 	}
+#endif /* CONFIG_X86_64 */
 	return ret;
 }

--
2.7.4