Subject: [tip:x86/fpu] x86/fpu: Clean up the parameter definitions of copy_xstate_to_*()
Commit-ID:  becb2bb72ff906cc0d2bac3ee9574f694364823b
Gitweb: http://git.kernel.org/tip/becb2bb72ff906cc0d2bac3ee9574f694364823b
Author: Ingo Molnar <mingo@kernel.org>
AuthorDate: Sat, 23 Sep 2017 14:59:49 +0200
Committer: Ingo Molnar <mingo@kernel.org>
CommitDate: Sun, 24 Sep 2017 13:04:31 +0200

x86/fpu: Clean up the parameter definitions of copy_xstate_to_*()

Remove pointless 'const' from the non-pointer input parameters.

Remove unnecessary parentheses that only betray uncertainty about arithmetic operator precedence.

Clarify copy_xstate_to_user() description.

No change in functionality.

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Eric Biggers <ebiggers3@gmail.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Yu-cheng Yu <yu-cheng.yu@intel.com>
Link: http://lkml.kernel.org/r/20170923130016.21448-7-mingo@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
arch/x86/kernel/fpu/xstate.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c
index 0a29946..9647e72 100644
--- a/arch/x86/kernel/fpu/xstate.c
+++ b/arch/x86/kernel/fpu/xstate.c
@@ -927,13 +927,13 @@ int arch_set_user_pkey_access(struct task_struct *tsk, int pkey,
 static inline int
 __copy_xstate_to_kernel(void *kbuf,
 			const void *data,
-			unsigned int pos, unsigned int count, const int start_pos, const int end_pos)
+			unsigned int pos, unsigned int count, int start_pos, int end_pos)
 {
 	if ((count == 0) || (pos < start_pos))
 		return 0;
 
 	if (end_pos < 0 || pos < end_pos) {
-		unsigned int copy = (end_pos < 0 ? count : min(count, end_pos - pos));
+		unsigned int copy = end_pos < 0 ? count : min(count, end_pos - pos);
 
 		memcpy(kbuf + pos, data, copy);
 	}
@@ -1010,13 +1010,13 @@ int copy_xstate_to_kernel(void *kbuf, struct xregs_state *xsave, unsigned int po
 }
 
 static inline int
-__copy_xstate_to_user(void __user *ubuf, const void *data, unsigned int pos, unsigned int count, const int start_pos, const int end_pos)
+__copy_xstate_to_user(void __user *ubuf, const void *data, unsigned int pos, unsigned int count, int start_pos, int end_pos)
 {
 	if ((count == 0) || (pos < start_pos))
 		return 0;
 
 	if (end_pos < 0 || pos < end_pos) {
-		unsigned int copy = (end_pos < 0 ? count : min(count, end_pos - pos));
+		unsigned int copy = end_pos < 0 ? count : min(count, end_pos - pos);
 
 		if (__copy_to_user(ubuf + pos, data, copy))
 			return -EFAULT;
@@ -1026,7 +1026,7 @@ __copy_xstate_to_user(void __user *ubuf, const void *data, unsigned int pos, uns

 /*
  * Convert from kernel XSAVES compacted format to standard format and copy
- * to a ptrace buffer. It supports partial copy but pos always starts from
+ * to a user-space buffer. It supports partial copy but pos always starts from
  * zero. This is called from xstateregs_get() and there we check the CPU
  * has XSAVES.
  */
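
For reference, a small self-contained user-space sketch of why both cleanups are safe follows. bounded_copy(), MIN() and the demo in main() are illustrative stand-ins for __copy_xstate_to_kernel() and the kernel's type-checked min(); they are not part of the patch.

#include <stdio.h>
#include <string.h>

/* Stand-in for the kernel's min(); the kernel version also type-checks its arguments. */
#define MIN(a, b) ((a) < (b) ? (a) : (b))

/*
 * Writing 'const int start_pos, const int end_pos' here would only forbid
 * reassigning the local copies inside the function body; C ignores top-level
 * qualifiers on parameters when determining function type compatibility, so
 * callers see the same prototype either way and the 'const' is pure noise.
 */
static int bounded_copy(char *kbuf, const char *data,
			unsigned int pos, unsigned int count,
			int start_pos, int end_pos)
{
	if (count == 0 || pos < (unsigned int)start_pos)
		return 0;

	if (end_pos < 0 || pos < (unsigned int)end_pos) {
		/*
		 * The right-hand side of '=' already parses as a full
		 * conditional expression, so the '?:' never needs the
		 * wrapping parentheses the patch removes.
		 */
		unsigned int copy = end_pos < 0 ? count : MIN(count, (unsigned int)end_pos - pos);

		memcpy(kbuf + pos, data, copy);
	}
	return 0;
}

int main(void)
{
	char dst[16] = { 0 };

	bounded_copy(dst, "hello", 0, 5, 0, -1);	/* end_pos < 0: no upper bound */
	printf("%s\n", dst);				/* prints "hello" */

	bounded_copy(dst, " world!", 5, 7, 0, 8);	/* bounded at 8: only 3 bytes copied */
	printf("%s\n", dst);				/* prints "hello wo" */
	return 0;
}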