SysStat Widget

Posted in macOS, edited January 2014
Does anyone use this widget? I'm using SysStat Nano, which is really nice. However, on the "Uptime" panel, there is a "load average" that means absolutely nothing to me. Does anyone know what the three numbers represent? I emailed the company (iSlayer) that created the widget, but have yet to receive a response...

Comments

  • Reply 1 of 7
    lundylundy Posts: 4,466member
    Quote:

    Originally posted by acr4

    Does anyone use this widget? I'm using SysStat Nano, which is really nice. However, on the "Uptime" panel, there is a "load average" that means absolutely nothing to me. Does anyone know what the three numbers represent? I emailed the company (iSlayer) that created the widget, but have yet to receive a response...



    Load average is straight from the Mach kernel - in a time-sliced multitasking OS, the way that different processes are given the CPU is that they are put into a queue, and run one after the other. The number of processes waiting in the queue, on average, over three recent time periods is the load average.



    The three numbers in the load average are the average number of processes waiting in the queue over the last 1, 5, and 15 minutes.
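
    If you want to see those same three numbers outside the widget, Python's standard library exposes them directly (a minimal sketch; on macOS this reads the same kernel values the widget displays):

    ```python
    import os

    # getloadavg() returns the 1-, 5-, and 15-minute load averages
    # as three floats -- the same numbers SysStat Nano shows.
    one, five, fifteen = os.getloadavg()
    print(f"1 min: {one:.2f}  5 min: {five:.2f}  15 min: {fifteen:.2f}")
    ```

    (`uptime` in Terminal prints the same triple at the end of its output.)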
  • Reply 2 of 7
    acr4acr4 Posts: 100member
    Then why are they not integer values? Currently I see: "0.39, 0.46, 0.26". Under extreme usage I have seen numbers as high as 1.3-something, but never an integer value. And by "time-sliced multitasking OS" do you mean multi-threaded?
  • Reply 3 of 7
    lundylundy Posts: 4,466member
    Quote:

    Originally posted by acr4

    Then why are they not integer values? Currently I see: "0.39, 0.46, 0.26". Under extreme usage I have seen numbers as high as 1.3-something, but never an integer value. And by "time-sliced multitasking OS" do you mean multi-threaded?



    They are the AVERAGE number of processes (threads, yes) that were waiting in the run queue over those time periods (there is also some massaging arithmetic done to compensate for things important only to Ph.D.s in Computer Science, heh heh).



    The Mach kernel takes the top process off the run queue, restores the state that the process was in when it blocked (waited) or was interrupted, sets a timer to interrupt Mach after 10 milliseconds, and finally sets the PC (program counter) to the instruction that was about to be executed next when the process was stopped. This makes the CPU start executing that code. The Mach kernel gets control back if



    - the 10 milliseconds expires

    - the thread stops to wait for I/O or another resource

    - another process with higher priority than the current one enters the queue
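
    That round-robin, time-sliced scheme can be sketched with a toy simulation (purely illustrative -- the process names and slice counts are made up, and this is nowhere near real Mach internals):

    ```python
    from collections import deque

    # Toy round-robin scheduler: each process needs some number of
    # 10 ms slices. The front of the queue runs for one slice, then
    # goes to the back of the queue if it still has work left.
    queue = deque([("A", 3), ("B", 1), ("C", 2)])  # (name, slices needed)
    order = []
    while queue:
        name, remaining = queue.popleft()
        order.append(name)  # this process gets the CPU for one slice
        if remaining > 1:
            queue.append((name, remaining - 1))  # re-queue at the back

    print(order)  # ['A', 'B', 'C', 'A', 'C', 'A']
    ```

    Every process makes steady progress because nobody keeps the CPU for more than one slice at a time.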



    A snapshot of the number of processes in the queue is taken periodically and those are averaged.



    If a low-priority process has not made it to the top of the queue in a certain amount of time, its priority is increased each time through the queue until it eventually runs.



    In a system with more than one processor or processor core, Mach looks at which processor core is the least busy before letting the next thread execute, and puts the thread on that processor.



    Deadlock or deadly embrace - process A holds resource AA and stops to wait for resource BB. Process B holds resource BB and stops to wait for resource AA..... kaboom. There is logic in Mach to detect this situation and break the deadlock.
  • Reply 4 of 7
    acr4acr4 Posts: 100member
    Ok thanks. So basically those numbers are samples of the time-averaged percentages (where 1.0 == 100%) of the processor load as a function of threads/processes managed by the kernel (which should be all of the processes).



    Computer engineering is fun... I should have gone CompE instead of EE
  • Reply 5 of 7
    lundylundy Posts: 4,466member
    Quote:

    Originally posted by acr4

    Ok thanks. So basically those numbers are samples of the time-averaged percentages (where 1.0 == 100%) of the processor load as a function of threads/processes managed by the kernel (which should be all of the processes).





    Umm.. well, not exactly. At any given instant there are running processes and runnable processes. Together these make up a number N of processes either running or runnable (runnable meaning not waiting for anything except the CPU).



    A snapshot of this number is taken every 5 seconds. So the snapshots might be 2, 0, 2, 1, 2, 3, 0, .....



    Averaged over 1 minute might be 1.26 - the 1-minute load average.
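
    Turning those snapshots into a load average is just arithmetic, which is why the result is fractional even though each snapshot is an integer (a sketch with invented sample values; real kernels actually use an exponentially-weighted average rather than a plain mean):

    ```python
    # One snapshot of the run-queue length every 5 seconds -> 12 per minute.
    snapshots = [2, 0, 2, 1, 2, 3, 0, 1, 2, 1, 0, 1]  # invented sample data

    load_1min = sum(snapshots) / len(snapshots)
    print(round(load_1min, 2))  # 1.25
    ```

    Integer counts, fractional average - which is why you see values like 0.39 rather than whole numbers.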

    Quote:



    Computer engineering is fun... I should have gone CompE instead of EE



    Oh hell yes - I could never grok those circuits.
  • Reply 6 of 7
    acr4acr4 Posts: 100member
    Oh okay, I understand now. So basically more than one process running or pending per sample would give a load average of more than one. (Meaning that a load average of one would mean that only one process was trying to run at any given time.)



    And circuits are easy, you just gotta step away from mesh and node equations and learn the shortcuts. Then it's just superposition. Easy stuff! Circuits aren't my thing though. DSP is where the real fun comes in.
  • Reply 7 of 7
    lundylundy Posts: 4,466member
    Quote:

    Originally posted by acr4

    Oh okay, I understand now. So basically more than one process running or pending per sample would give a load average of more than one. (Meaning that a load average of one would mean that only one process was trying to run at any given time.)





    Yep, and the ideal load average is 1.0, meaning that:



    1. on average, there is no time when the processor sits idle (an average less than 1 would mean idle time)

    2. on average, there are no processes being made to wait for the CPU (an average greater than 1 would mean waiting processes).