Subject: Re: Linux Kernel Dump Summit 2005
On Wed, 12 Oct 2005, Felix Oxley wrote:
>
> On Wednesday 12 October 2005 09:28, OBATA Noboru wrote:
>
> > CMD      | NET TIME (in seconds)          | OUTPUT SIZE (in bytes)
> > ---------+--------------------------------+------------------------
> > cp       |  35.94 (usr   0.23, sys 14.16) | 2,121,438,352 (100.0%)
> > lzf      |  54.30 (usr  35.04, sys 13.10) | 1,959,473,330 ( 92.3%)
> > gzip -1  | 200.36 (usr 186.84, sys 11.73) | 1,938,686,487 ( 91.3%)
> > ---------+--------------------------------+------------------------
> >
> > Although it is too early to say that lzf's compression ratio is good
> > enough, its compression speed is indeed impressive.
>
> As you say, the speed of lzf relative to gzip is impressive.
>
> However, if the properties of the kernel dump mean that it is not suitable
> for compression, then surely it is not efficient to spend any time on it.

Sorry, my last result was misleading. The dumpfile used above
was the one that showed the _worst_ compression ratio, so it
does not necessarily mean that a kernel dump is unsuitable for
compression.

I will retest with the normal dumpfiles.
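
For what it's worth, a comparison like the one above can be redone
with a small user-space program. The sketch below is only an
illustration (it is not the tool that produced the numbers quoted,
and it assumes the zlib and liblzf development libraries are
available): it reads a dump file in 1MB chunks and times liblzf
against zlib at level 1, which is roughly what "gzip -1" does.

/*
 * Sketch of a chunked compression benchmark.  This is NOT the tool
 * that produced the figures quoted above; it only illustrates the
 * kind of comparison.  Requires zlib and liblzf (link with -lz -llzf).
 */
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <zlib.h>
#include <lzf.h>

#define CHUNK (1 << 20)                 /* compress 1MB at a time */

static double now_sec(void)
{
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(int argc, char **argv)
{
        if (argc != 2) {
                fprintf(stderr, "usage: %s <dumpfile>\n", argv[0]);
                return 1;
        }

        FILE *fp = fopen(argv[1], "rb");
        if (!fp) {
                perror("fopen");
                return 1;
        }

        unsigned char *in  = malloc(CHUNK);
        unsigned char *out = malloc(2 * CHUNK); /* room for incompressible chunks */
        size_t n, total_in = 0, lzf_out = 0, gz_out = 0;
        double lzf_time = 0, gz_time = 0, t;

        while ((n = fread(in, 1, CHUNK, fp)) > 0) {
                total_in += n;

                /* liblzf: returns 0 if the chunk did not get smaller */
                t = now_sec();
                unsigned int l = lzf_compress(in, n, out, n - 1);
                lzf_time += now_sec() - t;
                lzf_out  += l ? l : n;

                /* zlib at level 1, roughly comparable to "gzip -1" */
                t = now_sec();
                uLongf dlen = 2 * CHUNK;
                if (compress2(out, &dlen, in, n, 1) != Z_OK) {
                        fprintf(stderr, "zlib error\n");
                        return 1;
                }
                gz_time += now_sec() - t;
                gz_out  += dlen;
        }
        fclose(fp);

        printf("input   %12zu bytes\n", total_in);
        printf("lzf     %12zu bytes (%5.1f%%) in %.2f s\n",
               lzf_out, 100.0 * lzf_out / total_in, lzf_time);
        printf("zlib -1 %12zu bytes (%5.1f%%) in %.2f s\n",
               gz_out, 100.0 * gz_out / total_in, gz_time);
        free(in);
        free(out);
        return 0;
}

Build it with something like "gcc -O2 bench.c -lz -llzf" and point it
at a dumpfile. Working chunk by chunk also avoids having to hold a
multi-gigabyte dump in memory.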

> > And the
> > result also suggests that it is too early to give up the idea of
> > a full dump with compression.
>
> Are you sure? :-)
> If we are talking about systems with 32GB of memory then we must be talking
> about organisations who can afford an extra 100GB of disk space just for
> keeping their kernel dump files.
>
> I would expect that speed of recovery would always be the primary concern.
> Would you agree?

Well, that depends on the user. It seems to be a trade-off
between the resources spent on the dump and the traceability of
the problem.

I have had a bitter experience analyzing a partial dump. The
dump completely lacked the PTE pages of user processes, and I
had to give up the analysis. A partial dump always carries the
risk that analysis will fail.

So what I'd suggest is that the level of partial dump be
tunable by the user when this is implemented as a kdump
functionality. Then a user who wants faster recovery or has
limited storage can choose a partial dump, while a careful user
with plenty of storage can choose a full dump.
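
To make that concrete, here is a purely hypothetical sketch (not an
existing kdump or kexec-tools interface; every name in it is invented
for illustration) of what a tunable dump level could look like: a
bitmask in which each bit excludes one class of pages, so that 0
keeps everything and higher values drop progressively more.

/*
 * Hypothetical illustration only -- this is not an existing kdump or
 * kexec-tools interface, and every name below is invented.  The idea:
 * the dump level is a bitmask, each bit excludes one class of pages,
 * 0 means "full dump", and higher values drop progressively more.
 */
#include <stdbool.h>
#include <stdio.h>

enum dump_exclude {
        DUMP_EXCL_ZERO_PAGES = 1 << 0,  /* pages known to be all zero */
        DUMP_EXCL_FREE_PAGES = 1 << 1,  /* pages on the free lists    */
        DUMP_EXCL_PAGE_CACHE = 1 << 2,  /* clean page-cache pages     */
        DUMP_EXCL_USER_DATA  = 1 << 3,  /* user-process data pages    */
        /*
         * Note that user PTE pages are deliberately not excludable in
         * this sketch, since losing them is exactly what made the
         * partial dump described above impossible to analyze.
         */
};

struct page_info {
        unsigned long pfn;
        bool zero, free, page_cache, user_data;
};

/* Decide whether a page is written to the dump at the given level. */
static bool page_wanted(const struct page_info *p, unsigned int level)
{
        if ((level & DUMP_EXCL_ZERO_PAGES) && p->zero)
                return false;
        if ((level & DUMP_EXCL_FREE_PAGES) && p->free)
                return false;
        if ((level & DUMP_EXCL_PAGE_CACHE) && p->page_cache)
                return false;
        if ((level & DUMP_EXCL_USER_DATA) && p->user_data)
                return false;
        return true;
}

int main(void)
{
        /* A careful user keeps everything ...                        */
        unsigned int full = 0;
        /* ... a user short on storage drops zero and free pages.     */
        unsigned int small = DUMP_EXCL_ZERO_PAGES | DUMP_EXCL_FREE_PAGES;

        struct page_info zero_page = { .pfn = 42, .zero = true };
        printf("full dump keeps pfn 42:  %d\n", page_wanted(&zero_page, full));
        printf("small dump keeps pfn 42: %d\n", page_wanted(&zero_page, small));
        return 0;
}

One nice property of such a scheme would be that the full dump is the
all-bits-clear default, so dropping information is always an explicit
choice by the user.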

By the way, if the speed of recovery is _really_ the primary
concern for a user, I'd suggest that the user form a cluster so
that operation can continue via failover. (The crashed node
then has enough time to generate a full dump.)

Regards,

--
OBATA Noboru (noboru.obata.ar@hitachi.com)
