    Subject: Re: [PATCH 3/3] mtrr, mm, x86: Enhance MTRR checks for KVA huge page mapping
    From: Toshi Kani <toshi.kani@hp.com>
    Date: Wed, 11 Mar 2015
    On Wed, 2015-03-11 at 08:02 +0100, Ingo Molnar wrote:
    > * Toshi Kani <toshi.kani@hp.com> wrote:
    >
    > > This patch adds an additional argument, *uniform, to
    >
    > s/*uniform/'uniform'

    Done.

    > > mtrr_type_lookup(), which returns 1 when a given range is
    > > either fully covered by a single MTRR entry or not covered
    > > at all.
    >
    > s/or not covered/or is not covered

    Done.

    > > pud_set_huge() and pmd_set_huge() are changed to check the
    > > new uniform flag to see if it is safe to create a huge page
    >
    > s/uniform/'uniform'

    Done.

    > > mapping to the range. This allows them to create a huge page
    > > mapping to a range covered by a single MTRR entry of any
    > > memory type. It also detects an unoptimal request properly.
    >
    > s/unoptimal/non-optimal

    Done.

    > or nonoptimal
    >
    > Also, some description in the changelog about what a 'non-optimal'
    > request is would be most useful.
    >
    > > They continue to check with the WB type since the WB type has
    > > no effect even if a request spans to multiple MTRR entries.
    >
    > s/spans to/spans

    Done.

    > > -static inline u8 mtrr_type_lookup(u64 addr, u64 end)
    > > +static inline u8 mtrr_type_lookup(u64 addr, u64 end, u8 *uniform)
    > >  {
    > >  	/*
    > >  	 * Return no-MTRRs:
    > >  	 */
    > > +	*uniform = 1;
    > >  	return 0xff;
    > >  }
    > >  #define mtrr_save_fixed_ranges(arg) do {} while (0)
    > > diff --git a/arch/x86/kernel/cpu/mtrr/generic.c b/arch/x86/kernel/cpu/mtrr/generic.c
    > > index cdb955f..aef238c 100644
    > > --- a/arch/x86/kernel/cpu/mtrr/generic.c
    > > +++ b/arch/x86/kernel/cpu/mtrr/generic.c
    > > @@ -108,14 +108,19 @@ static int check_type_overlap(u8 *prev, u8 *curr)
    > >   * *repeat == 1 implies [start:end] spanned across MTRR range and type returned
    > >   * corresponds only to [start:*partial_end].
    > >   * Caller has to lookup again for [*partial_end:end].
    > > + * *uniform == 1 The requested range is either fully covered by a single MTRR
    > > + *               entry or not covered at all.
    > >   */
    >
    > So I think a better approach would be to count the number of separate
    > MTRR caching types a range is covered by, instead of this hard-to-qualify
    > 'uniform' flag.
    >
    > I.e. a 'nr_mtrr_types' count.
    >
    > If for example a range partially intersects with an MTRR, then that
    > count would be 2: the MTRR, and the outside (default cache policy)
    > type.
    >
    > ( Note that this approach is not only easy to understand and easy
    > to review, but could also be refined in the future, to count the
    > number of _incompatible_ caching types present within a range. )

    I agree that using a count is more flexible. However, there are some
    issues with it, described below.

    - MTRRs have both fixed and variable ranges. The first 1MB is covered
    by 11 fixed-range registers of differing granularity, covering 512KB,
    128KB, or 32KB apiece. __mtrr_type_lookup() checks the memory type of
    the range at 'start', but does not check whether a requested range
    spans multiple memory types. This first 1MB can simply be handled as
    'uniform = 0', since processors do not create a huge page mapping in
    this 1MB range anyway. Setting a correct value in 'nr_mtrr_types',
    however, would require a major overhaul of this code (see the sketch
    after this list).

    - mtrr_type_lookup() returns without walking through all MTRR entries
    when check_type_overlap() returns 1, i.e. when the overlap has made
    the resulting memory type UC. In this case, the code cannot set a
    correct value in 'nr_mtrr_types'.
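
    To illustrate the first point: a correct count over the fixed ranges
    would have to walk every fixed-range field that the request touches,
    not just the field containing 'start'. A rough, untested sketch of
    what that would take, where fixed_range_type() stands in for a
    hypothetical per-field lookup helper:

    static int count_fixed_range_types(u64 start, u64 end)
    {
    	u8 seen[MTRR_NUM_TYPES] = {};	/* MTRR_NUM_TYPES == 7 */
    	int nr = 0;
    	u64 addr = start;

    	/*
    	 * Field sizes follow the fixed-range layout: 64KB fields below
    	 * 0x80000, 16KB fields up to 0xC0000, and 4KB fields up to
    	 * 0x100000 (11 registers, 88 fields in total).
    	 */
    	while (addr < end && addr < 0x100000) {
    		u64 gran = (addr < 0x80000) ? 0x10000 :
    			   (addr < 0xC0000) ? 0x4000 : 0x1000;
    		u8 type = fixed_range_type(addr);	/* hypothetical */

    		if (type < MTRR_NUM_TYPES && !seen[type]) {
    			seen[type] = 1;
    			nr++;
    		}
    		addr = ALIGN(addr + 1, gran);	/* next field boundary */
    	}
    	return nr;
    }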

    Since MTRRs are legacy, especially the fixed ranges, there is not much
    benefit in enhancing the functionality of mtrr_type_lookup() unless
    current platforms have an actual issue with it. For this patch, we only
    need to know whether the mapping count is 1 or more than 1. So, I think
    using 'uniform' makes sense for simplicity.
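
    For reference, this is roughly all that a pud_set_huge()-style caller
    needs from the 'uniform' result (a sketch of the intent, not the
    exact hunk from the patch):

    	u8 mtrr, uniform;

    	/*
    	 * A huge page mapping is safe when the range maps to at most
    	 * one MTRR entry, or when the effective type is WB, which has
    	 * no effect even if the range spans multiple entries.
    	 */
    	mtrr = mtrr_type_lookup(addr, addr + PUD_SIZE, &uniform);
    	if (!uniform && (mtrr != MTRR_TYPE_WRBACK))
    		return 0;	/* fall back to smaller page mappings */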

    > > -static u8 __mtrr_type_lookup(u64 start, u64 end, u64 *partial_end, int *repeat)
    > > +static u8 __mtrr_type_lookup(u64 start, u64 end,
    > > +			     u64 *partial_end, int *repeat, u8 *uniform)
    > >  {
    > >  	int i;
    > >  	u64 base, mask;
    > >  	u8 prev_match, curr_match;
    > >
    > >  	*repeat = 0;
    > > +	*uniform = 1;
    > > +
    > >  	if (!mtrr_state_set)
    > >  		return 0xFF;
    > >
    > > @@ -128,6 +133,7 @@ static u8 __mtrr_type_lookup(u64 start, u64 end, u64 *partial_end, int *repeat)
    > >  	/* Look in fixed ranges. Just return the type as per start */
    > >  	if (mtrr_state.have_fixed && (start < 0x100000)) {
    > >  		int idx;
    > > +		*uniform = 0;
    >
    > So this function scares me, because the code is clearly crap:
    >
    >	if (mtrr_state.have_fixed && (start < 0x100000)) {
    >		...
    >	} else if (start < 0x1000000) {
    >		...
    >
    > How can that 'else if' branch ever not be true?

    This 'else if' is always true, since the enclosing branch already
    guarantees start < 0x100000. So, it can simply be an 'else' without
    any condition.
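
    I.e., the fixed-range branch would end up looking something like this
    (a sketch of the cleanup, reusing the index math already in the code):

    	if (start < 0x80000)
    		idx = 0 + (start >> 16);		 /* 64KB fields */
    	else if (start < 0xC0000)
    		idx = 1 * 8 + ((start - 0x80000) >> 14); /* 16KB fields */
    	else	/* was: else if (start < 0x1000000) */
    		idx = 3 * 8 + ((start - 0xC0000) >> 12); /* 4KB fields */

    	return mtrr_state.fixed_ranges[idx];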

    > Did it perhaps want to be the other way around:
    >
    >	if (mtrr_state.have_fixed && (start < 0x1000000)) {
    >		...
    >	} else if (start < 0x100000) {
    >		...
    >
    > or did it simply mess up the condition?

    I think it was just being paranoid and testing the same condition
    twice...

    > >
    > >  		if (start < 0x80000) {
    > >  			idx = 0;
    > > @@ -195,6 +201,7 @@ static u8 __mtrr_type_lookup(u64 start, u64 end, u64 *partial_end, int *repeat)
    > >
    > >  			end = *partial_end - 1; /* end is inclusive */
    > >  			*repeat = 1;
    > > +			*uniform = 0;
    > >  		}
    > >
    > >  		if (!start_state)
    > > @@ -206,6 +213,7 @@ static u8 __mtrr_type_lookup(u64 start, u64 end, u64 *partial_end, int *repeat)
    > >  			continue;
    > >  		}
    > >
    > > +		*uniform = 0;
    > >  		if (check_type_overlap(&prev_match, &curr_match))
    > >  			return curr_match;
    > >  	}
    >
    >
    > > @@ -222,17 +230,21 @@ static u8 __mtrr_type_lookup(u64 start, u64 end, u64 *partial_end, int *repeat)
    > >  }
    > >
    > >  /*
    > > - * Returns the effective MTRR type for the region
    > > + * Returns the effective MTRR type for the region. *uniform is set to 1
    > > + * when a given range is either fully covered by a single MTRR entry or
    > > + * not covered at all.
    > > + *
    > >   * Error return:
    > >   * 0xFF - when MTRR is not enabled
    > >   */
    > > -u8 mtrr_type_lookup(u64 start, u64 end)
    > > +u8 mtrr_type_lookup(u64 start, u64 end, u8 *uniform)
    > >  {
    > > -	u8 type, prev_type;
    > > +	u8 type, prev_type, is_uniform, dummy;
    > >  	int repeat;
    > >  	u64 partial_end;
    > >
    > > -	type = __mtrr_type_lookup(start, end, &partial_end, &repeat);
    > > +	type = __mtrr_type_lookup(start, end,
    > > +				  &partial_end, &repeat, &is_uniform);
    > >
    > >  	/*
    > >  	 * Common path is with repeat = 0.
    > > @@ -242,12 +254,18 @@ u8 mtrr_type_lookup(u64 start, u64 end)
    > >  	while (repeat) {
    > >  		prev_type = type;
    > >  		start = partial_end;
    > > -		type = __mtrr_type_lookup(start, end, &partial_end, &repeat);
    > > +		is_uniform = 0;
    > >
    > > -		if (check_type_overlap(&prev_type, &type))
    > > +		type = __mtrr_type_lookup(start, end,
    > > +					  &partial_end, &repeat, &dummy);
    > > +
    > > +		if (check_type_overlap(&prev_type, &type)) {
    > > +			*uniform = 0;
    > >  			return type;
    > > +		}
    > >  	}
    > >
    > > +	*uniform = is_uniform;
    > >  	return type;
    >
    > So the MTRR code is from hell; it would be nice to first clean up
    > the whole code and the MTRR data structures before extending it with
    > more complexity ...

    Good idea. I will clean up the code (no functional change) before
    making this change.

    Thanks,
    -Toshi


