 
Subject: Re: FatELF patches...
On Tue, 2009-11-03 at 01:43 -0500, Eric Windisch wrote:
> First, I apologize if this message gets top-posted or otherwise
> improperly threaded, as I'm not currently a subscriber to the list (I
Given proper References: headers, the mail should have threaded
properly.
> can no longer handle the daily traffic). I politely ask that I be CC'ed
> on any replies.
Which raises the question of why you didn't Cc: anyone in the first place.

> In response to Alan's request for a FatELF use-case, I'll submit two of
> my own.
>
> I have customers which operate low-memory x86 virtual machine instances.
Low-resource environments (embedded or not) are probably the last
that want (or can even handle) such "bloat by design".
The question in that world is not "how can I make it run on more
architectures" but "how can I get rid of run-time code as soon as
possible".

> Until recently, these ran with as little as 64MB of RAM. Many customers
> have chosen 32-bit distributions for these systems, but would like the
> flexibility of scaling beyond 4GB of memory. These customers would like
> the choice of migrating to 64-bit without having to reinstall their
> distribution.
Just install a 64-bit kernel (and leave the user space intact). A 64-bit
kernel can run 32-bit binaries.
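
(That split is easy to verify from userspace, by the way - a minimal
sketch, assuming an x86_64 kernel over an i386 userland and a gcc
that accepts -m32:

/* Build with "gcc -m32"; the 32-bit binary sees the 64-bit kernel. */
#include <stdio.h>
#include <sys/utsname.h>

int main(void)
{
	struct utsname u;

	if (uname(&u) != 0) {
		perror("uname");
		return 1;
	}
	printf("kernel machine:  %s\n", u.machine); /* "x86_64" */
	printf("userspace words: %zu bits\n", sizeof(void *) * 8); /* 32 */
	return 0;
}

No FatELF needed for that migration path.)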

> Furthermore, I'm involved in several "cloud computing" initiatives,
> including interoperability efforts. There has been discussion of
The better solution is probably to agree on a pseudo machine code (like
e.g. the JVM, Parrot, or whatever) with good interpreters/JIT compilers
that focus on security, and on how to validate potentially hostile
programs, more than on anything else.

> assuring portability of virtual machine images across varying
> infrastructure services. I could see how FatELF could be part of a
> solution to this problem, enabling a single image to function against
> host services running a variety of architectures.
Let's hope that the n versions in a given FatELF image actually are
instances of the same source.
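
For those who haven't read the patches: a fat binary is essentially a
small index followed by the concatenated per-architecture ELF images,
and the loader picks the slice whose machine/ABI matches. Roughly
something like this (field names and layout are illustrative, not the
actual FatELF on-disk format):

#include <stdint.h>

struct fat_record {
	uint16_t machine;	/* ELF e_machine, e.g. EM_X86_64 */
	uint8_t  osabi;		/* ELF EI_OSABI */
	uint8_t  word_size;	/* 32 or 64 */
	uint64_t offset;	/* where this ELF image starts in the file */
	uint64_t size;		/* and how long it is */
};

struct fat_header {
	uint32_t magic;		/* some fixed magic value */
	uint16_t version;
	uint16_t num_records;
	/* followed by num_records * struct fat_record,
	 * then the concatenated ELF images themselves */
};

Whatever the loader ends up mapping, every copy of the file carries
all num_records images, which is what the drawbacks below are about.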

[....]
> I concede that there are a number of ways that solutions to these
> problems might be implemented, and FatELF binaries might not be the
> optimal solution. Regardless, I do feel that use cases do exist, even
> if there are questions and concerns about the implementation.
The obvious drawbacks are:
- Even if disk space is cheap, the sheer volume is a problem for
mirroring that stuff.
- Fat binaries (ab)use more Internet bandwidth. Hell, Fedora/Red Hat got
delta-RPMs working (just?) for this reason.
- Fat binaries (ab)use much more memory and I/O bandwidth - loading code
for n architectures and throwing n-1 of it away doesn't sound very
sound.
- Compiling+linking for n architectures needs n-1 cross-compilers
installed and working.
- Compiling+linking for n architectures needs much more *time* than for
1 (n times or so).
Guess what people/developers did first on the old NeXT machines: they
disabled the default "build for all architectures" setting, as it sped
things up.
Even if the expected development setup is "build for local only", at
least packagers and regression testers won't have the luxury of that.

The only remotely useful benefit I can imagine in the long run is this:
the permanent cross-compiling would make AC_TRY_RUN() go away. Or at
least the alternatives would be applicable without reading the
generated configure script (and config.log) to guess how to tell the
script the details.
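
For anyone who hasn't hit it: an AC_TRY_RUN() probe is a throwaway
test program that configure compiles *and runs* on the build machine.
A made-up example, checking whether long is 64 bits wide:

int main(void)
{
	/* Exit status 0 means "yes, long is 64 bits wide".
	 * configure runs this binary and looks at the result -
	 * exactly the step that cannot work when cross-compiling,
	 * so the answer must be fed in via a cache variable instead. */
	return sizeof(long) == 8 ? 0 : 1;
}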
But that isn't really worth it - we have been living without it for a
long time.

Bernd
--
Firmix Software GmbH http://www.firmix.at/
mobil: +43 664 4416156 fax: +43 1 7890849-55
Embedded Linux Development and Services



