Date: 28 Oct 1998
From: Clayton Weaver <cgweav@eskimo.com>
Subject: /proc format: standard representation, parsing

I can think of two sensible approaches to the encoding of /proc files.
This is less about the directory structure of /proc than about how the
individual files are encoded.

One is to use ASN.1. There are probably as many criticisms of it as of any
"portable" electronic data exchange format out there, but having the data
already in this format would make writing snmp MIBs that use information
in /proc really easy: mmap() the proc file(s), create the data structure
that represents the MIB block with individual structure members as derefed
pointers into one or more mmap()ed /proc files. This makes for a fast
snmp retrieve: it only has to find the stuff, not reformat it from some
linux-only data format to ASN.1 before using it.
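
Something like this, just a rough sketch (the /proc path is invented, and
it uses a plain read() rather than mmap(), since not every /proc file can
be mmap()ed):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/*
 * Rough sketch: slurp a hypothetical ASN.1-encoded /proc file into a
 * buffer that an snmp agent could hand straight to its existing BER
 * decoder.  The path is made up; the point is that no reformatting
 * step sits between the kernel and the agent.
 */
int main(void)
{
    char buf[4096];
    ssize_t n;
    int fd = open("/proc/asn1/cpuinfo", O_RDONLY);

    if (fd < 0) {
        perror("open");
        return 1;
    }
    n = read(fd, buf, sizeof(buf));
    close(fd);
    if (n < 0) {
        perror("read");
        return 1;
    }
    printf("read %ld bytes of ASN.1 data\n", (long)n);
    return 0;
}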

Standard tools like procps could use the ASN.1 parsers already written
for free snmp servers, or hack their own if "you don't like what you see
here". It would be a positive synergy: the linux ports of the snmp servers
would get the benefit of more skilled people looking at their ASN.1
parsers and the /proc-parsing tools would get the benefit of improvements
to the parsers discovered by the snmp server developers themselves. I
haven't looked at the specific linux-ported snmp servers presently
available (i.e., are they written in C or not), but this synergy is what
should happen given honest commitment to solving the problem with an
ASN.1 encoding.

Alternatively, one could use XML, and have /proc parsing tools take the
same approach that web browsers take: if you don't understand a tag, that
element isn't there. This makes it possible to extend the format of a
/proc file with new data items without breaking old tools that haven't
been re-written yet to use the new information.
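
A tiny sketch of that rule (the file contents here are invented; the
point is that an old tool only looks for the tags it already knows about,
so a new element like <cache_kb> simply isn't there as far as it is
concerned):

#include <stdio.h>
#include <string.h>

/*
 * Minimal sketch of the "ignore what you don't understand" rule.
 * The buffer stands in for a hypothetical /proc file; the <cache_kb>
 * element is "new" and an old tool never looks for it, so nothing
 * breaks.
 */
static void show_bogomips(const char *buf)
{
    const char *start = strstr(buf, "<bogomips>");
    const char *end;

    if (!start)
        return;                 /* element not present: not an error */
    start += strlen("<bogomips>");
    end = strstr(start, "</bogomips>");
    if (!end)
        return;
    printf("bogomips: %.*s\n", (int)(end - start), start);
}

int main(void)
{
    const char *proc_file =
        "<XML endian=\"0\">"
        "<bogomips>498.07</bogomips>"
        "<cache_kb>512</cache_kb>"     /* unknown to old tools */
        "</XML>";

    show_bogomips(proc_file);
    return 0;
}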

XML has no formatting information, just character data, so how you format
it for display is up to whatever software is processing the /proc file.
This is good: it takes everything beyond correct utf-8 representation for
the current locale out of the filesystem itself.

It doesn't have to be pure XML, although the closer you get to that the
easier it is to leverage existing XML parsing code. A couple of
enhancements to an XML encoding of /proc files:

first tag in a file:

<XML endian=[0|1]>
-- identifies the endianness of any pure binary data in the whole file --

tag for a binary data element:

<bogomips format=binary type=int length=[digit] prec=1>[some bytes]</bogomips>
-- Did they change the end tag? Or is that only for <empty/> elements? --

alternatively:

<bogomips format=binary type=float length=[digit] endian=[0|1] prec=1>[some bytes]</bogomips>

The format attribute only has two possible values, binary or utf-8
character data, with character being the default, so you only need to put
it there for binary data. In the first form, you just parse an int, and
"prec=" tells you where to insert the decimal point. In the second form,
you parse a float, and prec= tells you how many digits of the fractional
part to display.
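
As a sketch of the first form (host byte order assumed here; the
file-level endian attribute is discussed below), a tool might do
something like:

#include <stdio.h>
#include <string.h>

/*
 * Sketch: decode a format=binary type=int element.  The element's
 * value is a raw 4-byte integer in host byte order, and prec= gives
 * the number of digits after the implied decimal point (assumed >= 1
 * here).
 */
static void print_scaled(const unsigned char *bytes, int prec)
{
    unsigned int raw, div = 1;
    int i;

    memcpy(&raw, bytes, sizeof(raw));
    for (i = 0; i < prec; i++)
        div *= 10;
    printf("%u.%0*u\n", raw / div, prec, raw % div);
}

int main(void)
{
    unsigned int raw = 4980;            /* with prec=1, displays as 498.0 */
    unsigned char bytes[sizeof(raw)];

    memcpy(bytes, &raw, sizeof(raw));
    print_scaled(bytes, 1);
    return 0;
}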

I think setting the endianness for binary data once for the whole file
makes more sense than making it an attribute of each binary element. You
might be able to optimize the display of a few elements slightly by
keeping them in network byte order even on little-endian machines, but
that isn't worth encoding (and then checking) an endian attribute on
every binary element.
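
Checking once per file is cheap for the reader, too; something like this
(32-bit values only, just to illustrate, and assuming unsigned int is 32
bits):

#include <stdio.h>
#include <string.h>

/*
 * Sketch: normalize a 4-byte binary value using the file-level endian
 * attribute.  file_big_endian comes from the <XML endian=...> tag; the
 * host's byte order is probed at run time.
 */
static unsigned int to_host_order(unsigned int v, int file_big_endian)
{
    union { unsigned int u; unsigned char c[4]; } probe = { 1 };
    int host_big_endian = (probe.c[0] == 0);

    if (file_big_endian == host_big_endian)
        return v;
    return ((v & 0x000000ffu) << 24) |
           ((v & 0x0000ff00u) <<  8) |
           ((v & 0x00ff0000u) >>  8) |
           ((v & 0xff000000u) >> 24);
}

int main(void)
{
    /* the four bytes as they sit in a big-endian file: 4980 */
    unsigned char file_bytes[4] = { 0x00, 0x00, 0x13, 0x74 };
    unsigned int v;

    memcpy(&v, file_bytes, sizeof(v));
    v = to_host_order(v, 1);            /* 1: file declared big-endian */
    printf("value: %u\n", v);           /* 4980 on either kind of host */
    return 0;
}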

The length= attribute on binary data would be an absolute requirement,
and this is the place where you'd have to change the rules that a standard
XML parser lives by. When it sees the "format=binary" attribute, it has
to know how many bytes to read until the end of the element (because an
end tag open character could occur anywhere in binary data).
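
The change to the parser really is just that one rule; a sketch of the
step (attribute parsing is assumed to have happened already, and the
element contents here are invented):

#include <stdio.h>
#include <string.h>

/*
 * After a start tag with format=binary and length=N, the next N bytes
 * are raw data and must be consumed without scanning for '<'.  "p"
 * points just past the '>' of the start tag; length comes from the
 * length= attribute, already parsed.
 */
static const char *read_binary_element(const char *p, size_t length,
                                       unsigned char *out)
{
    memcpy(out, p, length);             /* raw bytes; '<' may occur inside */
    p += length;
    if (strncmp(p, "</", 2) != 0)       /* only now expect the end tag */
        return NULL;                    /* malformed element */
    return p;
}

int main(void)
{
    /* 4 raw bytes (the first happens to be '<', 0x3c), then the end tag */
    static const char data[] = { 0x3c, 0x13, 0x00, 0x00,
                                 '<', '/', 'b', 'o', 'g', 'o', 'm', 'i',
                                 'p', 's', '>', '\0' };
    unsigned char value[4];

    if (read_binary_element(data, 4, value))
        printf("consumed 4 binary bytes, found the end tag\n");
    return 0;
}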

All this makes it possible to write dtds for /proc files, and to validate
test output from alpha and beta versions of /proc output routines with
validating XML parsers (i.e. a debug check for the rules of the file
structure). You can even add versioned dtd names and declare that at the
top of the file, if you want to get fancy with remote parsing of /proc
files on linux networks with multiple kernel versions installed.

This isn't as directly beneficial for snmp parsers as an ASN.1 encoding,
but it still gives them a standard to work from (a /proc file dtd version)
that makes it straightforward to reformat any interesting data in a /proc
file into ASN.1 for use in an snmp MIB (standard dsssl stylesheet job).

Finding out all of the details needed to do an ASN.1 encoding would
require some online archeology (the information is scattered, but the
Z39.50 references include a few summaries of the encoding, and of course
there are numerous examples of ASN.1 in use in the snmp rfcs).

For the XML approach, the sgml-tools developers might have some
suggestions for /proc file dtds and any caveats to watch out for here.
(Maybe they think Docbook sgml is more appropriate, although I don't see
how full sgml, short of parsing it with nsgmls, is going to be easier to
parse than the considerably constrained subset that XML defines; on the
other hand, the dsssl stylesheets for docbook are more mature than
babe_in_the_crib standards like XSL.)

But when you don't need to parse *any* dtd, the job gets a little simpler
than it is for a standard-conformant sgml parser like nsgmls.

The beauty of the enhanced XML approach is that the standards are entirely
public, it's a hatchling of W3C (no ISO document fees), and every bit of
it is available for public scrutiny by developers. The beauty of the ASN.1
approach to /proc file encoding would probably be most apparent to snmp
developers, but either way offers an organized approach to data
representation in /proc files, perhaps less ad hoc than what we have now.

What do you see when you look at the filesystem? That depends on the tool.

cat /proc/encoded-file is going to show you the tagging, but the file is
still readable. Tools like procps would strip that out and format the data
each in its own way. Looking at /proc with a web browser that called a
script to format the data would be an obvious thing to do as well.

Either the ASN.1 or the XML approach would take time to implement. Having
it for 2.2 is probably a ludicrous expectation.

I like the idea of separating read-only data from modifiable data into
/proc and /syscfg filesystems, too. All of /proc can be read-only from
user space by definition. Why do we have /bin and /sbin? Same reason.

Regards, Clayton Weaver <mailto:cgweav@eskimo.com> (Seattle)

"Linux is not a backwater."



