From: Ingo Molnar <mingo@elte.hu>
Date: Sun, 22 May 2011
Subject: Re: [PATCH 5/9] HWPoison: add memory_failure_queue()

* huang ying <huang.ying.caritas@gmail.com> wrote:

> On Sun, May 22, 2011 at 6:00 PM, Ingo Molnar <mingo@elte.hu> wrote:
> >
> > * huang ying <huang.ying.caritas@gmail.com> wrote:
> >
> >> On Fri, May 20, 2011 at 7:56 PM, Ingo Molnar <mingo@elte.hu> wrote:
> >> >
> >> > * Huang Ying <ying.huang@intel.com> wrote:
> >> >
> >> >> > So why are we not working towards integrating this into our event
> >> >> > reporting/handling framework, as i suggested from day one, when you
> >> >> > started posting these patches?
> >> >>
> >> >> The memory_failure_queue() introduced in this patch is general, that is, it
> >> >> can be used not only by ACPI/APEI, but also by any other hardware error
> >> >> handler, including your event reporting/handling framework.
> >> >
> >> > Well, the bit you are steadfastly ignoring is what i have made clear well
> >> > before you started adding these facilities: THEY ALREADY EXIST to a large
> >> > degree :-)
> >> >
> >> > So you were and are duplicating code instead of using and extending existing
> >> > event processing facilities. It does not matter one little bit that the code
> >> > you added is partly 'generic'; it's still overlapping and duplicated.
> >>
> >> How would we do hardware error recovery in your perf framework?  IMHO, it
> >> could be something like the following:
> >>
> >> - The NMI handler runs for the hardware error; the error information is
> >> collected and put into a ring buffer, and an irq_work is triggered for
> >> further processing.
> >> - In the irq_work handler, memory_failure_queue() is called to do the real
> >> recovery work for any recoverable memory error in the ring buffer.
> >>
> >> What's your idea about hardware error recovery in perf?
> >
> > As a first step, the whole irq_work and ring buffer path already looks
> > largely duplicated: you can collect into a perf event ring-buffer from NMI
> > context just like regular perf events do.
>
> Why duplicated? perf uses the generic irq_work too.

Yes, of course, because - if you still remember - Peter split irq_work out of
perf events:

e360adbe2924: irq_work: Add generic hardirq context callbacks

|
| Perf currently has such a mechanism, so extract that and provide it as a
| generic feature, independent of perf so that others may also benefit.
|

:-)
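
For reference, the generic facility that commit provides boils down to a very
small API. A rough usage sketch - the names below are made up purely for
illustration, they are not from the APEI patches or any real driver:

#include <linux/irq_work.h>

/* runs later, in hard-IRQ context, outside the NMI */
static void example_irq_work_func(struct irq_work *work)
{
        /* do the work that is not NMI-safe here */
}

static struct irq_work example_work;

static void example_setup(void)
{
        init_irq_work(&example_work, example_irq_work_func);
}

/* NMI context: record what you must, then just queue and return */
static void example_nmi_handler(void)
{
        irq_work_queue(&example_work);
}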

But in hindsight the level of abstraction (for this use case) was set too low,
because we lose wider access to the actual events themselves:

> > The generalization that *would* make sense is not at the irq_work level
> > really; instead, we could generalize a 'struct event' for kernel-internal
> > producers and consumers of events that have no explicit PMU connection.
> >
> > This new 'struct event' would be slimmer and would only contain the fields
> > and features that generic event consumers and producers need. Tracing
> > events could be updated to use these kinds of slimmer events.
> >
> > It would still plug nicely into existing event ABIs, would work with event
> > filters, etc., so the tooling side would remain focused and unified.
> >
> > Something like that. It is rather clear by now that splitting out irq_work
> > was a mistake. But mistakes can be fixed and some really nice code could
> > come out of it! Would you be interested in looking into this?
>
> Yes. This can transfer hardware error data from the kernel to user space.
> Then, how do we do hardware error recovery in this big picture? IMHO, we will
> need to call something like memory_failure_queue() in IRQ context for memory
> errors.

That's where 'active filters' come into the picture - see my other mail (that
was in the context of unidentified NMI errors/events) where i outlined how they
would work in this case and elsewhere. Via active filters we could share most
of the code, gain access to the events and still have kernel-driven policy
action.
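
For reference, the deferred-recovery flow quoted above would look roughly like
this. This is a minimal sketch only: the names are made up, a one-slot variable
stands in for the real lock-free ring buffer, and memory_failure_queue() is
assumed to have the (pfn, trapno, flags) signature proposed in this series:

#include <linux/init.h>
#include <linux/irq_work.h>
#include <linux/mm.h>

/* illustrative one-slot buffer; a real handler needs a lock-free ring */
static unsigned long mce_pending_pfn;
static struct irq_work mce_recover_work;

/* hard-IRQ context: hand the failing page over to the recovery machinery */
static void mce_recover_func(struct irq_work *work)
{
        memory_failure_queue(mce_pending_pfn, 0, 0);
}

/* NMI context: record the failing pfn and defer everything else */
static void mce_report_memory_error(unsigned long pfn)
{
        mce_pending_pfn = pfn;
        irq_work_queue(&mce_recover_work);
}

static int __init mce_recover_init(void)
{
        init_irq_work(&mce_recover_work, mce_recover_func);
        return 0;
}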

Thanks,

Ingo
