From: Krzysztof Halasa
Subject: Re: FatELF patches...
Date: 2009-11-02
david@lang.hm writes:

>> In terms of disk space on distro TFTP servers only. You'll need to
>> transfer more, both from the user's and the distro's POV (obviously).
>> This simple fact alone is more than enough to forget FatELF.
>
> it depends on whether only one arch is being downloaded or not.

Well, from the user's POV it may get close if the user downloads maybe 5
different archs out of all those supported by the distro. Not very
typical, I guess.

> it could be considerably cheaper for mirroring bandwidth.

Maybe (though it can be solved with existing techniques).
The question is which counts here - bandwidth consumed by users, or by
mirrors?

> Even if Alan
> is correct and distros have re-packaged everything so that the
> arch-independent stuff is really in separate packages, most
> mirroring/repository systems keep each distro release/arch in a
> separate directory tree, so each of these arch-independent things gets
> copied multiple times.

If it were a (serious) problem (I think it's not), it could easily be
solved - think rsync, SHA-1/SHA-256-based mirroring and the like.
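
To illustrate, a minimal sketch of checksum-based deduplication across
per-arch mirror trees: files with identical SHA-256 content become hard
links, so arch-independent data is stored (and rsynced) once. The
directory names are made up for the example:

import hashlib
import os

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def dedup(trees):
    seen = {}  # digest -> first path seen with that content
    for tree in trees:
        for root, _dirs, files in os.walk(tree):
            for name in files:
                path = os.path.join(root, name)
                digest = sha256_of(path)
                if digest in seen:
                    # same content already present in another arch tree;
                    # both paths must live on the same filesystem
                    os.unlink(path)
                    os.link(seen[digest], path)
                else:
                    seen[digest] = path

dedup(["mirror/i386", "mirror/x86_64", "mirror/ppc"])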

> you don't have to compile multiple arches any more than you have to
> provide any other support for that arch. FatELF is a way to bundle the
> binaries that you were already creating, not something to force you to
> support an arch you otherwise wouldn't (although if it did make it
> easy enough for you to do so that you started to support additional
> arches, that would be a good thing).

Not sure - it still means longer compile times, longer downloads, and no
testing.
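
For reference, the bundling itself is just a small index in front of the
per-arch ELF images. A toy sketch of the idea (a hypothetical layout for
illustration only, not the actual FatELF on-disk format):

import struct

MAGIC = 0xFA7E1F00  # placeholder magic value, not FatELF's real one

def write_fat(out_path, images):
    # images: list of (e_machine, elf_bytes) pairs
    header = struct.pack("<II", MAGIC, len(images))
    record_size = struct.calcsize("<IQQ")
    offset = len(header) + len(images) * record_size
    records, payload = b"", b""
    for machine, blob in images:
        # one (machine, offset, size) record per embedded ELF image
        records += struct.pack("<IQQ", machine, offset, len(blob))
        payload += blob
        offset += len(blob)
    with open(out_path, "wb") as f:
        f.write(header + records + payload)

A loader would read the index, pick the record matching its own
e_machine, and map only that slice; the other images are never touched.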

> if you have a 1M binary with 500M of data, repeated for 5 arches, it is
> not a win vs a single 505M FatELF package in all cases.

A real example of such a binary, maybe?
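
For what it's worth, the arithmetic in that scenario (assuming the 500M
of data is arch-independent and gets duplicated into each per-arch
package):

binary, data, arches = 1, 500, 5        # sizes in MB

per_arch_pkg = binary + data            # 501 MB per separate package
fatelf_pkg = arches * binary + data     # 505 MB for the single fat package

mirror_separate = arches * per_arch_pkg # 2505 MB mirrored as 5 packages
mirror_fat = fatelf_pkg                 #  505 MB mirrored as one package

user_separate = per_arch_pkg            # 501 MB for a one-arch user
user_fat = fatelf_pkg                   # 505 MB - every user pays for all arches

The mirror saves ~2 GB and each user pays 4 MB extra - but only if the
500M of data couldn't simply have been split into an arch-independent
package in the first place.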
--
Krzysztof Halasa


\
 
 \ /
  Last update: 2009-11-02 21:35    [W:0.320 / U:0.500 seconds]
©2003-2020 Jasper Spaans|hosted at Digital Ocean and TransIP|Read the blog|Advertise on this site