Subject: Re: [PATCH 2/8] of: Enable DTB loading on UML for KUnit tests
From: Frank Rowand
Date: 2023-03-14
    On 3/13/23 11:02, Frank Rowand wrote:
    > On 3/11/23 00:42, David Gow wrote:
    >> On Sat, 11 Mar 2023 at 07:34, Stephen Boyd <sboyd@kernel.org> wrote:
    >>>
    >>> Quoting David Gow (2023-03-10 00:09:48)
    >>>> On Fri, 10 Mar 2023 at 07:19, Stephen Boyd <sboyd@kernel.org> wrote:
    >>>>>
    >>>>>
    >>>>> Hmm. I think you're suggesting that the unit test data be loaded
    >>>>> whenever CONFIG_OF=y and CONFIG_KUNIT=y. Then tests can check for
    >>>>> CONFIG_OF and skip if it isn't enabled?
    >>>>>
    >>>>
    >>>> More of the opposite: that we should have some way of supporting tests
    >>>> which might want to use a DTB other than the built-in one. Mostly for
    >>>> non-UML situations where an actual devicetree is needed to even boot
    >>>> far enough to get test output (so we wouldn't be able to override it
    >>>> with a compiled-in test one).
    >>>
    >>> Ok, got it.
    >>>
    >>>>
    >>>> I think moving to overlays probably will render this idea obsolete:
    >>>> but the thought was to give test code a way to check for the required
    >>>> devicetree nodes at runtime, and skip the test if they weren't found.
    >>>> That way, the failure mode for trying to boot this on something which
    >>>> required another device tree for, e.g., serial, would be "these tests
    >>>> are skipped because the wrong device tree is loaded", not "I get no
    >>>> output because serial isn't working".
    >>>>
    >>>> Again, though, it's only really needed for non-UML, and just loading
    >>>> overlays as needed should be much more sensible anyway.
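
    (A minimal sketch of that runtime check, as an aside: the node path and
    test name below are made up for illustration, while of_find_node_by_path()
    and kunit_skip() are existing kernel APIs:)

        #include <kunit/test.h>
        #include <linux/of.h>

        static void foo_serial_test(struct kunit *test)
        {
                struct device_node *np;

                /* Skip, rather than fail with no output, when DT data is absent. */
                np = of_find_node_by_path("/testcase-data");
                if (!np)
                        kunit_skip(test, "required devicetree nodes not present");
                of_node_put(np);

                /* ... test body that relies on /testcase-data ... */
        }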
    >>>
    >>> I still have one niggle here. Loading overlays requires
    >>> CONFIG_OF_OVERLAY, and the overlay loading API returns -ENOTSUPP when
    >>> CONFIG_OF_OVERLAY=n. For now I'm checking for the config being enabled
    >>> in each test, but I'm thinking it may be better to simply call
    >>> kunit_skip() from the overlay loading function if the config is
    >>> disabled. This way tests can simply call the overlay loading function
    >>> and we'll halt the test immediately if the config isn't enabled.
    >>>
    >>
    >> That sounds sensible, though there is a potential pitfall. If
    >> kunit_skip() is called directly from overlay code, it might introduce
    >> a dependency on kunit.ko from the DT overlay code, which we might not
    >> want. The solution there is either to have a kunit wrapper function
    >> (so the call is already in kunit.ko), or to have a hook to skip the
    >> current test (which probably makes sense to do anyway, but I think
    >> the wrapper is the better option).
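
    (A minimal sketch of such a wrapper: the name kunit_of_overlay_fdt_apply()
    is invented here, and of_overlay_fdt_apply() is shown with the
    three-argument signature it has around v6.3:)

        #include <kunit/test.h>
        #include <linux/of.h>

        /* Hypothetical wrapper living in kunit-managed code: skip the
         * running test instead of returning -ENOTSUPP when
         * CONFIG_OF_OVERLAY is disabled.
         */
        static int kunit_of_overlay_fdt_apply(struct kunit *test,
                                              const void *overlay_fdt,
                                              u32 overlay_fdt_size,
                                              int *ovcs_id)
        {
                if (!IS_ENABLED(CONFIG_OF_OVERLAY))
                        kunit_skip(test, "CONFIG_OF_OVERLAY is not enabled");

                return of_overlay_fdt_apply(overlay_fdt, overlay_fdt_size,
                                            ovcs_id);
        }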
    >>
    >>
    >>>>
    >>>>>>
    >>>>>> That being said, I do think that there's probably some sense in
    >>>>>> supporting the compiled-in DTB as well (it's definitely simpler than
    >>>>>> patching kunit.py to always pass the extra command-line option in, for
    >>>>>> example).
    >>>>>> But maybe it'd be nice to have the command-line option override the
    >>>>>> built-in one if present.
    >>>>>
    >>>>> Got it. I need to test loading another DTB on the command line still, but
    >>>>> I think this won't be a problem. We'll load the unittest-data DTB even
    >>>>> with KUnit on UML, so assuming that works on UML right now it should be
    >>>>> unchanged by this series once I resend.
    >>>>
    >>>> Again, moving to overlays should render this mostly obsolete, no? Or
    >>>> am I misunderstanding how the overlay stuff will work?
    >>>
    >>> Right, overlays make it largely a moot issue. The way the OF unit tests
    >>> work today is by grafting a DTB onto the live tree. I'm reusing that
    >>> logic to graft a container node target for kunit tests to add their
    >>> overlays to. It will be clearer once I post v2.
    >>>
    >>>>
    >>>> One possible future advantage of being able to test with custom DTs at
    >>>> boot time would be for fuzzing (provide random DT properties, see what
    >>>> happens in the test). We've got some vague plans to support a way of
    >>>> passing custom data to tests to support this kind of case (though, if
    >>>> we're using overlays, maybe the test could just patch those if we
    >>>> wanted to do that).
    >>>
    >>> Ah ok. I can see someone making a fuzzer that modifies devicetree
    >>> properties randomly, e.g. using different strings for clock-names.
    >>>
    >>> This reminds me of another issue I ran into. I wanted to test adding the
    >>> same platform device to the platform bus twice to confirm that the
    >>> second device can't be added. That prints a kernel warning, which
    >>> makes kunit.py treat the test as failed. Is there some way to avoid
    >>> that? I want something like
    >>> KUNIT_EXPECT_WARNING(test, <call some function>)
    >>>
    >>> so I can test error cases.
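
    (A sketch of how such a test might read; KUNIT_EXPECT_WARNING() is the
    wished-for macro from this thread, not an existing KUnit API, and the
    device name is made up:)

        #include <kunit/test.h>
        #include <linux/platform_device.h>

        static void platform_device_add_twice_test(struct kunit *test)
        {
                struct platform_device *pdev;

                pdev = platform_device_alloc("kunit-dup-dev", PLATFORM_DEVID_NONE);
                KUNIT_ASSERT_NOT_ERR_OR_NULL(test, pdev);
                KUNIT_ASSERT_EQ(test, 0, platform_device_add(pdev));

                /* The second add must fail, and it warns along the way. */
                KUNIT_EXPECT_WARNING(test,
                        KUNIT_EXPECT_NE(test, 0, platform_device_add(pdev)));

                platform_device_unregister(pdev);
        }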
    >
    > DT unittests already have a similar concept. A test can report that a
    > kernel warning (or any other specific text) either (1) must occur for the
    > test to pass or (2) must _not_ occur for the test to pass. The check
    > for the kernel warning is done by the test output parsing program
    > scripts/dtc/of_unittest_expect.
    >
    > The reporting of an expected error by a test in drivers/of/unittest.c
    > is done with EXPECT_BEGIN() and EXPECT_END(). These have been in
    > unittest for a long time.
    >
    > The reporting of an error that is expected _not_ to occur is done
    > with EXPECT_NOT_BEGIN() and EXPECT_NOT_END(). These were added to
    > unittest in Linux 6.3-rc1.
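
    The usage pattern in drivers/of/unittest.c brackets the statement that
    is expected to trigger the warning (the message text below is
    illustrative, not an actual unittest string):

        EXPECT_BEGIN(KERN_INFO,
                     "OF: overlay: expected warning text");

        ret = call_expected_to_warn();

        EXPECT_END(KERN_INFO,
                   "OF: overlay: expected warning text");

    scripts/dtc/of_unittest_expect then parses the console log and checks
    that the expected text appears between the two markers.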
    >
    > I discussed this concept in one of the early TAP / KTAP discussion

    The link to the early KTAP discussion on this concept is:

    https://lore.kernel.org/all/d38bf9f9-8a39-87a6-8ce7-d37e4a641675@gmail.com/T/#u


    > threads and expect to start a discussion thread on this specific
    > topic in the KTAP Specification V2 context. I expect the discussion
    > to result in a different implementation than what DT unittests are
    > using (bike shedding likely to ensue) but whatever is agreed to
    > should be easy for DT to switch to.

    The link to the KTAP Specification Version 2 process and progress is:

    https://elinux.org/Test_Results_Format_Notes#KTAP_version_2

    -Frank

    >
    >>
    >> Hmm... I'd've thought that shouldn't be a problem: kunit.py should
    >> ignore most messages during a test, unless it can't find a valid
    >> result line. What does the raw KTAP output look like? (You can get it
    >> from kunit.py by passing the --raw_output option).
    >>
    >> That being said, a KUNIT_EXPECT_LOG_MESSAGE() or similar is something
    >> we've wanted for a while. I think that the KASAN folks have been
    >> working on something similar using console tracepoints:
    >> https://lore.kernel.org/all/ebf96ea600050f00ed567e80505ae8f242633640.1666113393.git.andreyknvl@google.com/
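
    (A rough sketch of the tracepoint approach from that series; the probe
    and test names are illustrative, and register_trace_console() /
    unregister_trace_console() come from the console tracepoint in
    trace/events/printk.h:)

        #include <kunit/test.h>
        #include <linux/string.h>
        #include <trace/events/printk.h>

        static bool warning_seen;

        /* Called for every console line while the probe is registered. */
        static void probe_console(void *ignore, const char *buf, size_t len)
        {
                if (strnstr(buf, "WARNING:", len))
                        WRITE_ONCE(warning_seen, true);
        }

        static void double_add_warns_test(struct kunit *test)
        {
                WRITE_ONCE(warning_seen, false);
                KUNIT_ASSERT_EQ(test, 0,
                                register_trace_console(probe_console, NULL));

                /* ... call the code that is expected to WARN ... */

                KUNIT_EXPECT_TRUE(test, READ_ONCE(warning_seen));
                unregister_trace_console(probe_console, NULL);
                tracepoint_synchronize_unregister();
        }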
    >>
    >> Cheers,
    >> -- David
    >
