Subject: [PATCH 8/8] trace: trivial fixes in comment typos.
Date: 2009-02-08
    From: Wenji Huang <wenji.huang@oracle.com>

    Impact: clean up

    Fixed several typos in the comments.

    Signed-off-by: Wenji Huang <wenji.huang@oracle.com>
    Signed-off-by: Steven Rostedt <srostedt@redhat.com>
    ---
include/linux/ftrace.h |    2 +-
kernel/trace/ftrace.c  |    6 +++---
kernel/trace/trace.h   |    6 +++---
3 files changed, 7 insertions(+), 7 deletions(-)

    diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
    index 7840e71..5e302d6 100644
    --- a/include/linux/ftrace.h
    +++ b/include/linux/ftrace.h
    @@ -140,7 +140,7 @@ static inline int ftrace_disable_ftrace_graph_caller(void) { return 0; }
    #endif

    /**
    - * ftrace_make_nop - convert code into top
    + * ftrace_make_nop - convert code into nop
    * @mod: module structure if called by module load initialization
    * @rec: the mcount call site record
    * @addr: the address that the call site should be calling
    diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
    index 6861003..1796e01 100644
    --- a/kernel/trace/ftrace.c
    +++ b/kernel/trace/ftrace.c
    @@ -465,7 +465,7 @@ __ftrace_replace_code(struct dyn_ftrace *rec, int enable)
    * it is not enabled then do nothing.
    *
    * If this record is not to be traced and
    - * it is enabled then disabled it.
    + * it is enabled then disable it.
    *
    */
    if (rec->flags & FTRACE_FL_NOTRACE) {
    @@ -485,7 +485,7 @@ __ftrace_replace_code(struct dyn_ftrace *rec, int enable)
    if (fl == (FTRACE_FL_FILTER | FTRACE_FL_ENABLED))
    return 0;

    - /* Record is not filtered and is not enabled do nothing */
    + /* Record is not filtered or enabled, do nothing */
    if (!fl)
    return 0;

    @@ -507,7 +507,7 @@ __ftrace_replace_code(struct dyn_ftrace *rec, int enable)

    } else {

    - /* if record is not enabled do nothing */
    + /* if record is not enabled, do nothing */
    if (!(rec->flags & FTRACE_FL_ENABLED))
    return 0;

    diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
    index 5efc4c7..f92aba5 100644
    --- a/kernel/trace/trace.h
    +++ b/kernel/trace/trace.h
    @@ -616,12 +616,12 @@ extern struct tracer nop_trace;
    * preempt_enable (after a disable), a schedule might take place
    * causing an infinite recursion.
    *
    - * To prevent this, we read the need_recshed flag before
    + * To prevent this, we read the need_resched flag before
    * disabling preemption. When we want to enable preemption we
    * check the flag, if it is set, then we call preempt_enable_no_resched.
    * Otherwise, we call preempt_enable.
    *
    - * The rational for doing the above is that if need resched is set
    + * The rational for doing the above is that if need_resched is set
    * and we have yet to reschedule, we are either in an atomic location
    * (where we do not need to check for scheduling) or we are inside
    * the scheduler and do not want to resched.
    @@ -642,7 +642,7 @@ static inline int ftrace_preempt_disable(void)
    *
    * This is a scheduler safe way to enable preemption and not miss
    * any preemption checks. The disabled saved the state of preemption.
    - * If resched is set, then we were either inside an atomic or
    + * If resched is set, then we are either inside an atomic or
    * are inside the scheduler (we would have already scheduled
    * otherwise). In this case, we do not want to call normal
    * preempt_enable, but preempt_enable_no_resched instead.
    --
    1.5.6.5
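
For reference, the preemption-safe pattern that the corrected trace.h comment
describes looks roughly like the sketch below. This is a simplified
illustration of the technique, not a verbatim quote of the kernel source; the
*_notrace preempt helpers and the usage example are assumptions.

#include <linux/preempt.h>   /* preempt_*_notrace() helpers */
#include <linux/sched.h>     /* need_resched() */

/*
 * Remember whether need_resched was already set before disabling
 * preemption.  Returning that state lets the matching enable call
 * avoid triggering a reschedule from a context (e.g. inside the
 * scheduler) where that would recurse.
 */
static inline int ftrace_preempt_disable(void)
{
        int resched;

        resched = need_resched();
        preempt_disable_notrace();

        return resched;
}

/*
 * If need_resched was set when preemption was disabled, re-enable
 * without checking for a reschedule; otherwise do the normal enable,
 * which may schedule.
 */
static inline void ftrace_preempt_enable(int resched)
{
        if (resched)
                preempt_enable_no_resched_notrace();
        else
                preempt_enable_notrace();
}

A caller would then pair the two around the traced region:

        int resched;

        resched = ftrace_preempt_disable();
        /* ... touch the ring buffer ... */
        ftrace_preempt_enable(resched);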