Hi Guys,

Just a little project I am considering starting.

Basically, we covered quite a lot of interesting server analysis formulae in college. Yes, everyone loves calculating formulae!

I was thinking about pulling all these formulae together into an analysis tool. Maybe this would be of interest to other network admins. I know, I know.. I've already got a heap of stuff to monitor, so this would have to be effortless! It would involve simply dumping a log file into the app, e.g. mpstat data.

#1 Example Scenario
Suppose that during an observation period of 2 minutes, a
single CPU is observed to be busy for 40 seconds. A total of 2000
transactions are observed to have arrived in the system. The total
number of observed completions is also 2000 transactions. What
is the performance of the system?
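To show how little code each of these checks needs, here's a worked solution for Example #1 using the basic operational quantities (the variable names T, B, C are my own shorthand, not from any library):

```python
# Example #1: T = observation period, B = CPU busy time, C = completions.
T = 120.0   # 2 minutes, in seconds
B = 40.0    # CPU busy for 40 seconds
C = 2000    # completed transactions

utilisation = B / T      # U = B / T
throughput = C / T       # X = C / T, in transactions per second
service_time = B / C     # mean service time per transaction: S = B / C

print(f"U = {utilisation:.3f}")              # U = 0.333
print(f"X = {throughput:.2f} tps")           # X = 16.67 tps
print(f"S = {service_time * 1000:.1f} ms")   # S = 20.0 ms
```

So the CPU is only a third utilised, the system pushes through about 16.7 transactions per second, and each transaction needs 20 ms of CPU on average.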

#2 Example Scenario
A Web server is monitored for 10 minutes and its CPU is observed
to be busy 90% of the monitoring period. The Web server log
reveals that 30,000 requests are processed in that interval. What is
the CPU service demand of requests to the Web server?
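Example #2 falls straight out of the Service Demand Law, D = U / X. A quick sketch (again, variable names are just my own):

```python
# Example #2: CPU service demand per request via D = U / X.
T = 600.0    # 10-minute monitoring period, in seconds
U = 0.90     # measured CPU utilisation
C = 30000    # requests processed in the interval

X = C / T    # throughput: 50 requests per second
D = U / X    # CPU service demand per request

print(f"D = {D * 1000:.0f} ms per request")  # D = 18 ms per request
```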

Performance can be classified / analysed using
>>>Mean service time per transaction
>>>CPU utilisation
>>>System throughput
>>>Upper asymptotic bounds on throughput under heavy load conditions
>>>Upper asymptotic bounds on throughput under light load conditions
>>>Bounds on performance
.... </list>
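The asymptotic bounds in the list above are also easy to compute once the per-resource service demands are known. A minimal sketch, assuming a closed system with the standard operational bound X(N) <= min(1/D_max, N/(D + Z)) (the function name and signature are my own):

```python
def throughput_upper_bound(n_users, demands, think_time=0.0):
    """Asymptotic throughput bound: X(N) <= min(1 / D_max, N / (D + Z)).

    demands    -- list of per-resource service demands (seconds)
    think_time -- Z, the user think time (seconds)
    """
    d_max = max(demands)      # heavy-load bound comes from the bottleneck
    d_total = sum(demands)    # light-load bound comes from total demand
    return min(1.0 / d_max, n_users / (d_total + think_time))

# With demands of 20 ms and 10 ms: the bottleneck caps throughput at 50/s
# under heavy load, while a single user can't exceed 1 / 0.03 = 33.3/s.
print(throughput_upper_bound(10, [0.02, 0.01]))  # 50.0
print(throughput_upper_bound(1, [0.02, 0.01]))
```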

The following known laws could be implemented
>> Utilisation Law
>> Service Demand Law
>> Forced Flow Law
>> Little's law
>> Interactive response time law
>> Amdahl's law
....</list>
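Each of these laws is a one-liner in code, which is part of why I think the tool is feasible. A rough sketch of what the library layer might look like (function names and signatures are my own invention, not an existing API):

```python
def utilisation_law(throughput, service_time):
    """Utilisation Law: U = X * S."""
    return throughput * service_time

def service_demand_law(utilisation, throughput):
    """Service Demand Law: D = U / X."""
    return utilisation / throughput

def littles_law_population(throughput, response_time):
    """Little's Law: N = X * R."""
    return throughput * response_time

def interactive_response_time(population, throughput, think_time):
    """Interactive Response Time Law: R = N / X - Z."""
    return population / throughput - think_time

# e.g. the Web server from Example #2: 50 req/s at 18 ms demand each
print(utilisation_law(50.0, 0.018))  # 0.9
```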

*basically an analysis tool that will examine an observed log for a time period, quantify performance models and typical bounds on performance, flag bottleneck resources in the system, etc.

Does anyone think that this tool would be useful?