Subject: Re: [RFC] Unify KVM kernel-space and user-space code into a single project

* Zachary Amsden <zamsden@redhat.com> wrote:

> On 03/18/2010 12:50 AM, Ingo Molnar wrote:
> > * Avi Kivity <avi@redhat.com> wrote:
> >
> > > > The moment any change (be it as trivial as fixing a GUI detail or as
> > > > complex as a new feature) involves two or more packages, development speed
> > > > slows down to a crawl - while the complexity of the change might be very
> > > > low!
> > > Why is that?
> > It's very simple: because the contribution latencies and overhead compound,
> > almost inevitably.
> >
> > If you ever tried to implement a combo GCC+glibc+kernel feature you'll know
> > ...
> >
> > Even with the best-run projects in existence it takes forever and is very
> > painful - and here i talk about first hand experience over many years.
>
> Ingo, what you miss is that this is not a bad thing. Fact of the
> matter is, it's not just painful, it downright sucks.

Our experience is the opposite - we have tried both variants, and I am reporting
honestly about our experience with both models.

You only have experience about one variant - the one you advocate.

See the asymmetry?

> This is actually a Good Thing (tm). It means you have to get your
> feature and its interfaces well defined and able to version forwards
> and backwards independently from each other. And that introduces
> some complexity and time and testing, but in the end it's what you
> want. You don't introduce a requirement to have the feature, but
> take advantage of it if it is there.
>
> It may take everyone else a couple years to upgrade the compilers,
> tools, libraries and kernel, and by that time any bugs introduced by
> interacting with this feature will have been ironed out and their
> patterns well known.

Sorry, but this is plainly not true. The 2.4->2.6 kernel cycle debacle has taught
us that waiting a long time to 'iron out' the details has the following effects:

- developer pain
- user pain
- distro pain
- disconnect
- loss of developers, testers and users
- grave bugs discovered months (years ...) down the line
- untested features
- developer exhaustion

It didn't work, trust me - and I've been around long enough to have suffered
through the whole 2.5.x misery. Some of our worst ABIs come from that cycle as
well.

So we first created the 2.6.x process, and then, as we saw that it worked much
better, we _sped up_ the kernel development process some more, to what many
claimed was an impossible, crazy pace: a two-week merge window, 2.5 months of
stabilization and a stable release every 3 months.

And you can also see the countless examples of carefully drafted, well-thought-out,
committee-written computer standards that were honed for years and are not worth
the paper they are written on.

'Extra time' and 'extra bureaucratic overhead to think things through' are about
the worst things you can inject into a development process.

You should think of the human brain as a cache - the 'closer' things are, both
in time and physically, the better they end up being. Also, the more gradual and
the more concentrated a thing is, the better it works out in general. This is
part of basic human nature.

Sorry, but I really think you are trying to rationalize a disadvantage
here ...

Ingo

