Subject: Re: Proc fs limit workaround?
Ricky Beam wrote:
>
> On Thu, 14 Sep 2000, Nick Pollitt wrote:
> ...
> >And second, why is the 4K limit there in the first place?
>
> Primarily because it was never designed for 90% of the crap that's in there
> now. I have long hated the BS required to get more than 4k worth of stuff
> out of /proc. The way around the limit is not a solution, it's a hack.
> There's no atomicity for processing more than one page unless you go out
> of your way to deal with it. I've banged my head on the desk a few times
> because of this -- what happens when there's any delay between read()s?
> *sigh*
>
> #ifdef RANT
> In case you haven't noticed, a lot of present-day Linux is a nice collection
> of hacks. This is the nature of code evilution -- I have to deal with this
> every day (of course, I'm paid to.) procfs was actually a Very Good Thing(r)
> six or seven years ago when it was _designed_. Now look at it.
>
> I'm a perfectionist. I like things to be well planned, designed, and
> implemented to do what they were designed to do. If you want it to do
> something else, return to the planning stage. For example, 747s weren't
> designed to clear cut forests. While they can be used for such a task,
> they are quite inefficient at it.
> #endif

I only disagree on one point - /proc was a bad design from the beginning,
since you could expect it to turn into exactly what you have there now!!!!
(Actually, please see in the archives the MANY MANY people on linux-kernel
who were arguing against it before it ever got in...)
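
For anyone who hasn't had to write one, here is a rough sketch of the kind of
"hack" Ricky is talking about, using the old read_proc interface (2.2/2.4
style). None of this is from a real driver -- the foo_* names and the item
array are made up -- but the off/count/*start bookkeeping is the usual idiom.
The handler gets a single page to fill, gets called again for every read(),
and has to slice out the requested window itself; nothing keeps the data
stable between two calls.

#include <linux/module.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/proc_fs.h>

#define FOO_NR_ITEMS 4096            /* hypothetical: far more than 4K of text */
static unsigned long foo_items[FOO_NR_ITEMS];

/*
 * Old-style read_proc handler.  'page' is one page of memory; 'off' and
 * 'count' describe the window the current read() wants.  The whole output
 * is regenerated on every call and only the requested slice is kept.
 */
static int foo_read_proc(char *page, char **start, off_t off,
                         int count, int *eof, void *data)
{
        int len = 0;          /* bytes currently held in 'page'  */
        off_t begin = 0;      /* logical offset of page[0]       */
        int i;

        for (i = 0; i < FOO_NR_ITEMS; i++) {
                len += sprintf(page + len, "item %4d: %lu\n", i, foo_items[i]);
                if (begin + len > off + count)
                        goto done;            /* past the window: stop      */
                if (begin + len < off) {
                        begin += len;         /* still before the window:   */
                        len = 0;              /* throw the text away        */
                }
        }
        *eof = 1;                             /* walked everything          */
done:
        *start = page + (off - begin);
        len -= (off - begin);
        if (len > count)
                len = count;
        if (len < 0)
                len = 0;
        return len;
}

static int __init foo_init(void)
{
        /* 2.4-era registration call; error handling omitted in this sketch */
        create_proc_read_entry("foo", 0, NULL, foo_read_proc, NULL);
        return 0;
}

static void __exit foo_exit(void)
{
        remove_proc_entry("foo", NULL);
}

module_init(foo_init);
module_exit(foo_exit);

Note that the item array is walked from the top on every call; if entries are
added or removed between two read()s, the offsets shift and the reader sees
duplicated or missing lines. That is the missing atomicity.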