    Subject: Re: Remote fork() and Parallel Programming
    Hi,

    Andrej Presern <andrejp@luz.fe.uni-lj.si> wrote:

    >The checkpoint / continue capability is a prerequisite to process
    >migration, since to be able to migrate a process, one must be able to
    >checkpoint and continue it first, transparency or no transparency. Also,
    >you could have noticed that a checkpoint / continue capability makes a
    >remote fork capability quite trivial, since a copy of the process that
    >has been checkpointed can be continued on as many (remote) nodes as
    >desired [...]

    Yes I know. But I don't know why you are writing such things.

    What I have been trying to say is that these functionalities would be
    better provided by the kernel.
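
    Just to make the dependency concrete, here is a rough sketch (in C) of how
    a remote fork could be layered on top of a checkpoint/continue facility.
    The checkpoint_to_fd() and continue_from_fd() calls are purely hypothetical
    -- they do not exist in Linux -- and stand for exactly the kernel support I
    am talking about:

        #include <unistd.h>

        /* Hypothetical kernel services -- these do NOT exist in Linux; they
         * stand for whatever checkpoint/continue support the kernel would
         * have to provide. */
        int checkpoint_to_fd(pid_t pid, int fd); /* dump full run-time state of pid to fd */
        int continue_from_fd(int fd);            /* rebuild that state and resume it      */

        /* Remote fork is then little more than shipping the image: the
         * sender checkpoints itself into a socket, and every node that
         * reads the image and calls continue_from_fd() continues its own
         * copy of the process. */
        int remote_fork(int sock)
        {
                return checkpoint_to_fd(getpid(), sock);
        }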

    > [...](so much for flexibility, transparency and having hard time
    >duplicating the run-time state of a process).

    You are just dumping the problems of duplicating the run-time state into
    the checkpoint/restart mechanism. You _will_ need support from the kernel
    to do this. Please remember that the pride of a monolithic OS is in hiding
    its internals from application programs.

    >Also, I have to inform you that any dynamic load balancing decision is
    >at best a speculative one, since a node _cannot_ know how the load will
    >change on a node that we want to migrate to in the future (nor do we
    >know that for the originating node), which means that we might find
    >ourselves filling network bandwidth by copying process data from node to
    >node as load on nodes goes up and down [...]

    Thanks for the information, but not everybody wants to run short Java
    programs on a cluster. Many tasks run for a long time, which makes an
    assessment of their resource usage possible. In such environments, making
    load-balancing decisions becomes both feasible and necessary, and process
    migration helps to remedy any mistakes made in those decisions.
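
    As one simplistic example of what a long-running environment lets you
    measure: the load averages in /proc/loadavg only mean something once jobs
    have been running for a while, and they are the kind of input a balancing
    or migration decision can use. A minimal sketch:

        #include <stdio.h>

        /* Minimal sketch: read the 1-, 5- and 15-minute load averages from
         * /proc/loadavg.  For long-running jobs these numbers have had time
         * to become meaningful, so they can serve as one input to a
         * load-balancing or migration decision. */
        int main(void)
        {
                double l1, l5, l15;
                FILE *f = fopen("/proc/loadavg", "r");

                if (!f) {
                        perror("/proc/loadavg");
                        return 1;
                }
                if (fscanf(f, "%lf %lf %lf", &l1, &l5, &l15) != 3) {
                        fclose(f);
                        return 1;
                }
                fclose(f);
                printf("load averages: %.2f %.2f %.2f\n", l1, l5, l15);
                return 0;
        }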

    > [...] If you want to do dynamic load
    >balancing, you want to minimize the cost of doing it, so you want to
    >avoid expensive operations, such as copying a process over the network.
    >Instead of balancing the load by process migration, one can do it much
    >more efficiently by stopping objects on overloaded nodes and restarting
    >or continuing others on idle ones (if they are continued, they can take
    >off from where they were stopped; if they are restarted they are reused
    >but they start from the beginning and usually with a different set of
    >data). Dynamic load balancing can be done simply by observing the
    >progress that individual (remote and local) objects of the application
    >make and making a balancing decision based on that [...]

    Objects?? I am not sure if we are talking about the same operating system.

    > [...] And who is more
    >competent in determining the progress of a part of an application than
    >the application itself?

    The progress of an application can sometimes come into conflict with the
    progress of other applications running in the same system. Applications are
    usually too selfish, and should not be allowed to make decisions that affect
    other processes.

    >Also, the all-knowing authority (even if such a god-like object existed)
    >in the system that you refer to is a very bad concept security-wise,
    >since such an object directly violates the principle of least authority.
    >(so much for 'awareness', 'predicting future' and 'all knowing OS').

    Such a god-like object already exists. It is called a Monolithic Operating
    System. I am sorry if you think it has so many bad characteristics, but I
    didn't invent it.

    >To get to more practical points, if we put aside the obvious advantage
    >from the view of performance, that a message that starts an object is
    >obviously much more efficient than copying a process over the network
    >(if you page it in on demand, you still copy it), remote forking is
    >still not a very good thing, especially in a heterogeneous CPU
    >environment. It's much better to 'start' an equivalent object on a
    >remote node that has been built and optimized for the architecture of
    >the node. One should remember that since by connecting into the internet
    >you're essentially connecting into a heterogeneous supercomputer, we
    >might as well do things the right way.

    Quite right. One can use a suitably compiled copy of the program on a
    remote machine (so there is no need for code conversion), but converting
    the run-time state to the data representation format of a remote node can
    be very hard or impossible. However, having this ability, even a limited
    form of it, might be very useful in heterogeneous environments.
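
    To illustrate why the run-time state is the hard part: even a single
    32-bit counter is laid out differently in memory on little- and big-endian
    CPUs, so a raw checkpoint image taken on one architecture cannot simply be
    continued on another. Converting each field explicitly (as htonl() does
    below) is trivial for one integer; doing it for the complete state of an
    arbitrary process is what makes this hard or impossible:

        #include <stdio.h>
        #include <netinet/in.h>

        int main(void)
        {
                unsigned int counter = 0x12345678;      /* some run-time state */
                unsigned char *raw = (unsigned char *)&counter;
                unsigned int wire = htonl(counter);     /* explicit, per-field
                                                           conversion to a fixed
                                                           byte order */

                /* A little-endian CPU prints 78 56 34 12, a big-endian one
                 * 12 34 56 78 -- the same state, two incompatible images. */
                printf("native bytes:  %02x %02x %02x %02x\n",
                       raw[0], raw[1], raw[2], raw[3]);
                printf("network order: %08x\n", wire);
                return 0;
        }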


    Maybe I have not expressed my views very clearly. Very briefly, I would
    rather see the kernel offer high-level services like dynamic process
    migration. It is not important which mechanism can simulate the other one.
    The important thing is to allow the application programmer to use the
    cluster as easily as using a single computer. Transparency is the keyword
    here. Such services can be offered very transparently through the kernel,
    even if they are not completely implemented inside the kernel.

    For an example of such a design, you can take a look at DIPC (Distributed
    Inter-process Communication), a system that extends System V's semaphores,
    messages and shared memories to work in a cluster. DIPC's web pages are at
    http://wallybox.cei.net/dipc
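
    As a taste of what "transparent" means here: the fragment below uses
    nothing but the standard System V shared memory calls. Under DIPC the same
    interface is meant to keep working when the cooperating processes sit on
    different nodes of the cluster; exactly how a segment is marked as
    distributed is DIPC-specific and not shown (the key and size below are
    arbitrary):

        #include <stdio.h>
        #include <string.h>
        #include <sys/types.h>
        #include <sys/ipc.h>
        #include <sys/shm.h>

        int main(void)
        {
                key_t key = 0x4242;            /* arbitrary example key */
                int id;
                char *mem;

                /* Standard System V calls: create and attach a 4 KB segment. */
                id = shmget(key, 4096, IPC_CREAT | 0666);
                if (id < 0) {
                        perror("shmget");
                        return 1;
                }
                mem = shmat(id, NULL, 0);
                if (mem == (char *)-1) {
                        perror("shmat");
                        return 1;
                }
                strcpy(mem, "hello from one process"); /* visible to any attacher */
                shmdt(mem);                            /* detach; segment remains */
                return 0;
        }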


    -Kamran Karimi


