Date: 22 May 1999
Subject: Capabilities, wish list, and flame bait
First off, I love Linux and have been using it since the early '90s.
I've implemented a number of client systems using Linux and used
Linux as a development platform for customers using SysV and AIX.

As Linux has grown, the complexity of using the system has grown even
faster. By 'using' I mean using it as a desktop platform. As a server
system you expect some difficulty and complexity, but then someone is
usually responsible (and paid?) to understand and tolerate that
complexity.
As a desktop system, the user typically doesn't have the time or
ability to invest in the administration of the system.

The core concepts of Unix in general are over 20 years old. This
age tends to show up in some strange places. I think the two places
where this age is most apparent are configuration and file structure.

Installers are nice, but they are a band-aid on the system to hide
real difficulties. Recent discussions of capabilities also have
an underlying difficulty. Notice that both of these issues share a
common root. Not security (a different discussion), but the packaging
of the meta-data (that is, descriptive information about the data).

I've got a small system and tend to install new stuff to try it out
and then often end up removing it later. Given a small disk, the
makefile is no longer around for a 'make uninstall' - as if all
makefiles had one anyway. I don't like RPMs, as they assume a
configuration that is often different from the software author's. The
result is I've got lots of little pieces sitting around that are not
used by any application anymore, but I can't tell by looking what they
belong to.

What does this have to do with capabilities? If a solution is found
for storing meta-data (capabilities) for arbitrary executables and
scripts, then a direction is in place to store other meta-data. For
example, why do we have to have icons, configuration files, logs,
and whatever splattered all over the file system(s)?

I've always been a fan of the (old?) Mac files that contain everything
about an application in a single file. I don't recommend that exact
approach, but the concept is good. So the following are a series of
ideas for the Linux 2.3.x wish-list that may spark some debate. These
are classified as Short Term (ST) and Long Term (LT). Feel free to
define these terms as you see fit.

* (ST) Break the assumption that an executable is a file. Take two
unused bits in the file flags to indicate the meta-data type:

0 Unknown Format
1 No Meta-data
2 Meta-data directory
3 Meta-data file (see LT version later)

The first time the VFS encounters a '0' it can check the entry to see
if it has meta-data and set the type appropriately. This way old
directory entries are not affected. Foreign file systems can always
return '0' which will trigger a (slow) check in the VFS.
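To make the tagging concrete, here is a rough sketch in C. Every name
in it is invented for illustration; nothing below is an existing
kernel identifier.

#include <stdbool.h>

/* Hypothetical two-bit meta-data tag kept in otherwise unused file
 * flag bits.  All names are invented for this sketch. */
enum md_tag {
        MD_UNKNOWN = 0,  /* not yet classified (what a foreign fs returns) */
        MD_NONE    = 1,  /* ordinary file, no meta-data                    */
        MD_DIR     = 2,  /* meta-data directory (the ST proposal)          */
        MD_FILE    = 3   /* meta-data file, a 'binder' (the LT proposal)   */
};

struct entry_stub {
        bool        is_dir;
        bool        has_md_layout;   /* e.g. contains <name>/<name> */
        enum md_tag tag;             /* the two flag bits           */
};

/* Lazy classification: the first time the VFS meets a '0' it does the
 * (slow) check and records the answer, so old directory entries never
 * have to be rewritten up front. */
static enum md_tag classify(struct entry_stub *e)
{
        if (e->tag != MD_UNKNOWN)
                return e->tag;
        e->tag = (e->is_dir && e->has_md_layout) ? MD_DIR : MD_NONE;
        return e->tag;
}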

A file without meta-data - all existing files - gets a tag of '1' and
the VFS and kernel need do nothing different than is currently done.

A directory that contains meta-data gets a tag of '2' and both the
VFS and kernel handle it differently. The directory will contain
files and other directories. Special file names in the top level
directory have an assumed type. A specific simple case (for
capabilities) could be:

/bin/bash
|
+- bash.capabilities
+- bash
+- bash.filemap
+- bashrc
+- bash.1.gz
+- bashbug.1.gz

Note that an 'exec' of bash now references a directory but the kernel
executes the binary in "bash/bash" which could be in a.out, elf, or
another executable format. The executable capabilities are taken from
the obvious file. This is equivalent to the elf capabilities patches
in that they have to be located and processed. Only now the process
is done outside the bin format handler. The question of tools to set
and manage capabilities becomes moot, as standard file tools may be
used (assuming text capabilities).
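To illustrate the redirection, here is a user-space sketch only; the
<name>/<name> naming convention comes from the example above,
everything else is my assumption.

#include <stdio.h>
#include <string.h>
#include <libgen.h>

/* Given the path named in exec(), build the real binary path and the
 * capabilities path under the proposed naming convention. */
static void md_exec_paths(const char *path, char *bin, char *cap, size_t n)
{
        char tmp[256];

        strncpy(tmp, path, sizeof(tmp) - 1);
        tmp[sizeof(tmp) - 1] = '\0';
        const char *base = basename(tmp);              /* "bash"         */
        snprintf(bin, n, "%s/%s", path, base);         /* /bin/bash/bash */
        snprintf(cap, n, "%s/%s.capabilities", path, base);
}

int main(void)
{
        char bin[512], cap[512];

        md_exec_paths("/bin/bash", bin, cap, sizeof(bin));
        printf("execute:      %s\ncapabilities: %s\n", bin, cap);
        return 0;
}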

The file map is used to redirect file and directory access from legacy
applications that don't know about the new structure. It also improves
'packaging' as everything about an application is in one location. An
application that needs to scan its own code would open, in this
example, '/bin/bash' which would be redirected to '/bin/bash/bash'
silently. This slows older apps down, but anyone can re-package an
application without source changes.
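As an example of what such a file map might hold (the format here is
completely made up, just to show the intent):

# /bin/bash/bash.filemap -- hypothetical format
# legacy path                      redirected to
/bin/bash                    ->    /bin/bash/bash
/etc/bashrc                  ->    /bin/bash/bashrc
/usr/man/man1/bash.1.gz      ->    /bin/bash/bash.1.gz

A legacy open() of a path on the left would be silently redirected to
the packaged copy on the right.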

Sending a copy of bash to a system that doesn't support these changes
only means that /bin/bash/bash is sent rather than /bin/bash. Likewise,
adding capabilities to an existing application is just a little work
with existing tools. Nothing is lost in cross platform transfers.

A more complex structure would account for applications that have
more than one executable. Take a hypothetical mail system that
could be structured as:

/usr/bin/hypmail
|
+- send
| |
| +- send.capabilities
| +- send
| +- send.filemap
+- recv
| |
| :
|
+- stat
| |
| :
|
+- mail.cap
+- mail.rc
+- send.1
+- recv.1
+- stat.1
+- mailcap.8
+- mail.rc.8
|
+- .icons
|
+- no-mail.icn
+- have-mail.icn

Now the top level directory is a package rather than an executable
alias. We can have many 'send' executables and the user needs to say
which is to run. This would be done by executing 'hypmail/send' to
select the send in the hypmail package.

This points out the single largest (I think) change needed to adopt
such a hierarchy. The PATH search done in the libraries needs to look
for directories as well as plain files. If a directory contains a
default executable, in the form <name>/<name>, and <name> is the last
part of the command given, then it is executed. Otherwise the search
fails.

If searching a PATH entry hits a component of the command it needs
to search the directory. That is, the command "hypmail/send" checks
the PATH for hypmail and if found looks inside for a send. This
would need to be done after standard PATH searching to preserve the
current expectations for relative path handling. This ordering also
allows the hypmail directory to be in the PATH so 'send' by itself
defaults to the one in hypmail.
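A rough user-space sketch of that lookup order follows. It is
simplified (it ignores the usual rule that a command containing a
slash is not PATH-searched at all), and every name in it is mine, not
libc's.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/stat.h>

/* True only for executable regular files, so a meta-data directory is
 * skipped on the first pass and resolved on the second. */
static int is_exec_file(const char *p)
{
        struct stat st;

        return stat(p, &st) == 0 && S_ISREG(st.st_mode) &&
               access(p, X_OK) == 0;
}

/* Pass 0: today's behaviour, <dir>/<cmd> as a plain file.
 * Pass 1: new behaviour, <dir>/<cmd>/<base>, the default executable
 *         inside a package directory ("bash" -> .../bash/bash,
 *         "hypmail/send" -> .../hypmail/send/send). */
static int resolve(const char *cmd, char *out, size_t outlen)
{
        const char *path = getenv("PATH");
        const char *base = strrchr(cmd, '/');

        base = base ? base + 1 : cmd;
        if (!path)
                return -1;

        for (int pass = 0; pass < 2; pass++) {
                char *copy = strdup(path);
                char *dir;

                for (dir = strtok(copy, ":"); dir; dir = strtok(NULL, ":")) {
                        if (pass == 0)
                                snprintf(out, outlen, "%s/%s", dir, cmd);
                        else
                                snprintf(out, outlen, "%s/%s/%s",
                                         dir, cmd, base);
                        if (is_exec_file(out)) {
                                free(copy);
                                return 0;
                        }
                }
                free(copy);
        }
        return -1;
}

int main(int argc, char **argv)
{
        char buf[4096];

        if (argc > 1 && resolve(argv[1], buf, sizeof(buf)) == 0)
                printf("%s -> %s\n", argv[1], buf);
        return 0;
}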

There would be other tool changes to completely handle this structure.
The 'man' commands, for example, would need to look for manuals in
a different manner. On the other hand, a utility that creates and
destroys links based upon file maps could be created.

Notice also that I have snuck in an '.icons' resource directory
which could be supported by special run-time library routines. Not
a kernel issue, but included for completeness.

Capabilities, in this scheme, may be shared across a network by
including the top directory level. This makes complete systems more
manageable. If individual systems need specific capabilities, they
can link to the executable in the directory (from a capabilities
endowed directory of course).

* (LT) Since the above scheme effectively creates an executable package,
why not go all the way and allow the kernel to execute a segment
from within a single file. The meta-data tag '3' is reserved for this
form. In effect this would be a file level 'loop'-like device that
bypasses mounting and unmounting - binderfs? A set of functions
would be built (in device/file form) to manipulate the contents
of this file type.

Since it is not a directory and it is not a file, I like to think
of these special files as 'binders.' Picture a three ring binder
filled with papers and divided into sections. The same sort of
structure represented above.

System calls would need to distinguish between treating the binder
as a directory and treating it as a file. For example, the command
"cp hypmail newmail" would treat a binder as a file and, in effect,
create a new application. The command "cp hypmail/send newsend"
would treat the binder as a directory and the 'send' entry as a
file.

The net effect is the same as the above (ST) bullet, but we save
inodes (is this an issue?) and permit the entire package to be
operated on as a whole. (Of course, this could be achieved with
actual directories since we are changing kernel system calls).

One real advantage of this form would be that file systems with
limited file names, inodes, etc., could still store an entire
binder file without loss of information. We could also share
64 bit binders with 32 bit systems with a little work.

Another difference would be an extended directory entry in this
form. Such an entry, inside the binder, would include the entry
type and format as might be derived from a 'file' or 'mime' type
now.
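Something like the following is what I have in mind for an entry
inside a binder; every field name here is invented for illustration.

/* Hypothetical extended directory entry stored inside a binder. */
struct binder_entry {
        char          name[64];   /* "send", "mail.rc", ".icons/no-mail.icn" */
        unsigned int  type;       /* executable, capability set, manual, ... */
        char          format[32]; /* what 'file' or a MIME type would report */
        unsigned long offset;     /* where the bytes live inside the binder  */
        unsigned long length;
};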

And finally, there should be some form of signature on a binder
so that we know when it has been modified. This could be used
to warn a user or administrator that the binder being installed
is not in a form originally intended by the author. Maybe someone
has changed the capabilities on us?

* (LT) This has probably already been addressed in the capabilities
discussions. Most of what I've seen has focused on the executable's
capabilities. There should also be user and group capabilities.
A system administrator may wish to remove an entire group's ability
to run suid programs. A user (and his shell) gets his individual
capabilities and the 'or' of all his groups.
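A sketch of that combination rule (the bit assignments and the rule
itself are only what I describe above, not any existing proposal's
exact semantics):

#include <stdio.h>

typedef unsigned long capset_t;      /* one bit per capability */

/* The capabilities a user's shell starts with: his own set OR'd with
 * the sets of all the groups he belongs to. */
static capset_t login_caps(capset_t user, const capset_t *grp, int ngrp)
{
        capset_t caps = user;

        for (int i = 0; i < ngrp; i++)
                caps |= grp[i];
        return caps;
}

int main(void)
{
        capset_t user = 0x5;                 /* two example capabilities */
        capset_t groups[] = { 0x2, 0x8 };    /* one from each group      */

        printf("login capabilities: 0x%lx\n",
               login_caps(user, groups, 2)); /* prints 0xf */
        return 0;
}

The executable's own capability sets would presumably then restrict or
extend this starting set, as in the existing capability discussions.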

* (ST) Add frame buffer device drivers for older VESA video cards.
Essentially this would be a core chunk and a series of modules. The
core portion would perform the card auto-detection logic and then
load and initialize the appropriate module(s). Modules would correspond
to the card family and implement an API for the X Frame Buffer code.
It seems that the best place to start would be moving XFree code into
kernel modules.

This has the side effect of allowing the various graphics developers
to concentrate on their specific targets. Game engines implementing
specific polygon algorithms can assume a minimum system capability
to build upon. There will, I assume, be non-Frame Buffer extensions
to the API to permit specialized operations for non-X interfaces. I
haven't looked at either the Frame Buffer code or, say, GGI(?) to
really understand the differences.

* (LT) Do away with X as primary graphics engine! The days where
specialized display devices (X terminal) were cost effective are
long gone. X assumes a central intelligence (the client) driving
a cheap and dumb display (the server). Given today's (and I presume
tomorrow's) hardware capabilities, the display devices are typically
intelligent and powerful.

This is the same assumption made of Web Browsers where the display
device is charged with executing the layout. Using X's assumption
of relatively dumb devices, the communication between the computer
and the display device is based upon low level primitives. If we can
identify an appropriate interface point, the communications could be
based upon high level directives. This off-loads both the central
computer (I'm avoiding the term server since X is involved) and the
communication medium(s). Such off-loading should also improve overall
performance of remote displays, which can get extremely bad in some
environments. I've had client systems become nearly unusable because
of this alone.

Display hardware missing some component, such as a diskless
workstation, would still have all the other network protocols to call
upon. The NFS, CODA, SMB interfaces are optimized for an explicit
purpose and can be assumed (safely?) to perform better than an
equivalent embedding inside another protocol. Maybe there are new
protocols needed for special purposes such as fetching a font or
pieces of a font from a font server (yes, I know there is such a
thing, but is it optimal in a different display architecture?).

Where do you start such a search? Perhaps X11 again. Is there a
higher level at which the computer/display allocation may be
defined? Other candidates may exist embedded in other software.
The HTML family is such a concept, but lacks (I believe) the
precision and control necessary. It also suffers from a lack of
a binary protocol to reduce overhead. Display Postscript may have
the precision, but also lacks binary and control features. I
don't have the answers, just the questions.

* (LT) I've always thought that controlling displays at the pixel level
is primitive. Can we devise a means to address a display relative
to actual sizes, percentages of available pixels, and/or other variable
specifications? For example, some X applications assume that you have
at least a certain size display (say 1024x768) and you can't get at
buttons on the bottom of the window with lower resolutions unless
you have panning. Or you have a high resolution display and the
program assumes low resolution, resulting in a tiny, unreadable
little window. Applications could include scaling transformations, but
wouldn't it be nice if the display driver did this in a standard and
consistent manner. Likewise, some applications need a specific size
for a displayed item - say 3cm x 6cm. It would be nice to have the
driver know how to handle such dimensions. I realize this is typically
a user space library issue, but if the display driver is in the
kernel, perhaps such transformations are as well.
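A back-of-the-envelope example of the kind of conversion the driver
would do (the mode and screen size are illustrative only):

#include <stdio.h>

int main(void)
{
        double width_px  = 1024.0;  /* example video mode           */
        double width_mm  = 320.0;   /* example physical screen size */
        double px_per_mm = width_px / width_mm;   /* 3.2 px/mm      */
        double want_cm   = 3.0;     /* application asks for 3 cm    */

        printf("3 cm -> %.0f pixels\n", want_cm * 10.0 * px_per_mm);
        return 0;
}

On a higher resolution mode of the same monitor the same request would
simply come out as more pixels, which is exactly the scaling the
application should not have to care about.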

Flame at will ... email is jmills@portland.quik.com


