
On Thu, 18 Oct 2012, Russell Coker wrote:
> On Thu, 18 Oct 2012, Peter Ross <Peter.Ross@bogen.in-berlin.de> wrote:
> > Well, if it stayed at /proc/$pid... but even then, at least the Linux
> > way (sorry, my last look at Plan9 was too long ago to remember), it is
> > not very efficient to open a dozen files in a directory to get all
> > process-relevant information.
> > http://en.wikipedia.org/wiki/Sysctl#Performance_considerations
> > describes the dilemma.
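To make the "dozen files per process" point concrete, here is a minimal, hypothetical sketch (not top's actual code) of how a Linux monitoring tool gathers per-process data: it must open and read several separate files under /proc/<pid>/ for every process on every refresh. The file names (stat, statm, cmdline) are real procfs entries; the helpers themselves are illustrative.

```python
# Hypothetical sketch of per-process data gathering on Linux procfs.
# Each metric lives in its own small file, so one refresh costs one
# open/read/close cycle per file, per process.
import os

def read_proc_files(pid, names=("stat", "statm", "cmdline")):
    """Open one file per metric under /proc/<pid>/ and return raw contents."""
    data = {}
    for name in names:
        path = "/proc/%d/%s" % (pid, name)
        try:
            with open(path, "rb") as f:
                data[name] = f.read()
        except OSError:
            pass  # the process may have exited between listing and reading
    return data

def parse_stat(stat_line):
    """Extract (pid, comm, state) from a /proc/<pid>/stat line.
    The comm field is parenthesised and may contain spaces, so split on
    the last closing parenthesis rather than naively on whitespace."""
    lhs, _, rhs = stat_line.rpartition(")")
    pid, _, comm = lhs.partition(" (")
    fields = rhs.split()
    return int(pid), comm, fields[0]
```

Multiply those opens by 165 processes and a sub-second refresh interval and the syscall overhead adds up, which is the dilemma the Wikipedia link describes.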
> I tested the example given of running "top" and holding down the
> space-bar. That gave about 30% system CPU across both cores of my
> system, i.e. 60% of one core. Of course that included the X overhead;
> presumably less CPU time would have been used on a virtual console.
I did it on both (X and a text console); the result is ca. 25% in both
cases. A FreeBSD system I pushed to ca. 1.6% CPU time that way. (Note:
they are not exactly comparable systems, but not so far apart that I
would expect a 1:15 result. Both have 4GB of RAM; the FreeBSD system
runs 125 processes, the Linux one 165.)
> Also any test that involves polling something at maximum speed can use
> 100% of a CPU core. The issue is whether some useful task takes that
> much CPU time. Opening files that are entirely RAM-based is still a
> reasonably fast operation.
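The cost under discussion is per-file overhead, not data volume. A rough, portable sketch of that effect, using ordinary temp files as a stand-in for /proc entries (the numbers 43 files and 100 bytes are illustrative, chosen to match one /proc/<pid> directory):

```python
# Hypothetical microbenchmark: reading N small files (one open/read/close
# each) versus one file holding the same total payload. On Linux you
# would point this at /proc/<pid>/ entries instead of temp files.
import os
import tempfile
import time

def time_many_small(paths):
    """Time one open/read/close cycle per path."""
    t0 = time.perf_counter()
    for p in paths:
        with open(p, "rb") as f:
            f.read()
    return time.perf_counter() - t0

def time_one_big(path):
    """Time a single open/read/close of the same total payload."""
    t0 = time.perf_counter()
    with open(path, "rb") as f:
        f.read()
    return time.perf_counter() - t0

with tempfile.TemporaryDirectory() as d:
    small = []
    for i in range(43):          # 43 files, like one /proc/<pid> directory
        p = os.path.join(d, "f%d" % i)
        with open(p, "wb") as f:
            f.write(b"x" * 100)  # each file holds ~100 bytes
        small.append(p)
    big = os.path.join(d, "big")
    with open(big, "wb") as f:
        f.write(b"x" * 100 * 43) # same total payload in a single file

    t_small = time_many_small(small)
    t_big = time_one_big(big)
```

The many-small case pays the open and close syscall cost 43 times for the same bytes, which is the overhead a single sysctl(3)-style call avoids.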
My /proc has ca. 165 process directories with 43 entries in each.

Regards
Peter