TA # 116222
Date Created: 12/10/2001 01:38 PM
Date Updated: 08/05/2011 07:57 AM
How do I quickly determine the cause of a performance problem using sar?
Keywords
openserver 5.0.0 5.0.2 5.0.4 5.0.5 5.0.6 5.0 performance problem troubleshoot bottleneck sar slow system guide activity memory CPU time wio usr bottleneck uw7 unixware unixware7 uw ou ou8 openunix openunix8 711 7.1.1 sco 713 7.1.3 analysis analyse 714 7.1.4 osr6 openserver6 osr5 600 6.0.0 troubleshooting rtpm monitor benchmark benchmarking monitoring sysinfo trouble troubleshoot kernel tuning differences diff difference hog cpuhog iohog memhog lsof truss ipcs streams WCHAN prfsnap prfpr
Release
          SCO OpenServer Release 6.0.0 
          SCO OpenServer Enterprise System Release 5.0.5, 5.0.6, 5.0.7 
          SCO OpenServer Desktop System Release 5.0.5, 5.0.6, 5.0.7 
          SCO OpenServer Enterprise System Release 5.0.2, 5.0.4 
          SCO OpenServer Desktop System Release 5.0.2, 5.0.4 
          UnixWare 7 Release 7.1.1, 7.1.3, 7.1.4 
          SCO Open UNIX Release 8.0.0 
Problem
          My system's behavior is slow.  How do I quickly determine the most
	  likely cause of the bottleneck?

          For OpenServer5/6, ensure "sar" is enabled with:
 
              # /usr/lib/sa/sar_enable -y

          UnixWare runs sar automatically by default.



Solution
          The following is a two-step guide to quickly finding the bottleneck
	  on a system with slow performance.

Note:     Running two sar commands as a "before and after" test, (once before
	  the system slowdown and once during the system slowdown), will make
	  it easier to see differences in sar data.

          Lines beginning with the '#' prompt indicate commands to be run.

To review existing "sar" data from a previous day or saved file:

# cd /var/adm/sa

# sar -A -f sa<day>


STEP 1
------
Check the general system activity:

# sar 1 5
09:35:13    %usr    %sys    %wio   %idle (-u)
09:35:14      17       0       0      83
09:35:15       5       0       0      95
09:35:16       5       0       0      95
09:35:17       5       0       0      95
09:35:18       5       1       0      94

Average        7       0       0      92

This command should always be run first to get a general idea of the primary
location of the bottleneck.


- If %usr is high, the system is spending most of its CPU time running
user-mode code (i.e. sort, data gathering/processing programs).

CAUSES: Non-interactive programs running unnecessarily at peak hours,
slow CPU, not enough CPUs, unnecessary programs running, inefficient
third party programs running, daemons processing data, bad nice values.


- If %sys is high, the system is spending most of its CPU time in the
kernel servicing system calls and drivers (i.e. hardware issues, spurious
interrupts, third party drivers).

CAUSES: Inefficient third party drivers, bad hardware causing spurious
interrupts, slow CPU, not enough CPUs.


- If BOTH %usr and %sys are high, the CPU is saturated by all types of
work, both user generated and kernel generated.

CAUSES: Slow CPU, not enough CPUs.


- If %wio is high, the system is waiting for the disk subsystem to retrieve data.

CAUSES: Not enough disk cache (NBUF/NHBUF), slow hard drive system, not
enough memory, memory leak from process, process grabbing too much memory.

Check the basic disk performance speed with the command below. A more
advanced test would use more random writes to the disk, for example by
creating and unpacking a large "cpio" archive, or by using a benchmarking
tool.

For example, this will generate a 1GB file:

# timex dd if=/dev/zero of=big bs=1024k count=1000

A more detailed test file is shown below.

STEP 2
------
- IF %usr IS HIGH:

1) Check for processes consuming too much CPU time:

# ps -el | more
  F S    UID   PID  PPID  C PRI NI     ADDR   SZ  TTY       TIME CMD
 71 S      0     0     0  0  95 20 fb117000    0    ?   00:00:01 sched
 20 S      0     1     0  0  66 20 fb117158  148    ?   00:00:00 init
.
.
.
 20 S      0   347     1  0  76 24 fb119db0  312    ?   00:00:00 snmpd
 20 S     17   349     1  1  66 20 fb119f08  156    ?   01:05:53 deliver
 20 S      0   413   410  0  75 20 fb11a060  128    ?   00:00:00 lockd

(WCHAN column removed to fit on screen)

Check the C and TIME values. If the TIME value (hours:minutes:seconds)
is unusually high and C is positive for a specific process, then
that process could be taxing the system.  In the above example, the deliver
daemon is processing leftover admin mail.  The mail can be removed.
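Eyeballing the TIME column can be automated with a small filter like the following (the cpu_hogs name and threshold are our own; it assumes the "ps -el" layout shown above, with TIME in the next-to-last column):

```shell
#!/bin/sh
# cpu_hogs: print processes whose accumulated CPU TIME exceeds a
# threshold in minutes, reading "ps -el" output on stdin.
cpu_hogs() {
    mins=${1:-5}
    awk -v max="$mins" '
        NR > 1 {
            n = split($(NF-1), t, ":")          # TIME is [hh:]mm:ss
            secs = 0
            for (i = 1; i <= n; i++) secs = secs * 60 + t[i]
            if (secs > max * 60) print $(NF-1), $NF
        }'
}

# e.g.  ps -el | cpu_hogs 30    # anything with > 30 minutes of CPU
```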

NOTE:
         WCHAN (l)
          The address of an event for which the process is sleeping or in
          SXBRK state; if blank, the process is running.  With -L, the
          value is reported for each individual LWP.

You can also use "w" and "who -u" to check for idle sessions that are
clocking up CPU time:

# w
   4:41pm  up  5:04,  3 users,  load average: 0.00 0.00 0.00
User     tty            login@   idle    JCPU    PCPU  what
root     tty01         11:51am          21:38          bash
root     tty03          4:31pm      9                  -sh
root     ttyp1          1:55pm                         w

# who -u | sort -k 6 -r
root       tty03        Jul  6 16:31  0:09   1774
root       ttyp1        Jul  6 13:55  0:01   3902
root       tty01        Jul  6 11:51   .     1773


2) Check for system call activity:

# sar -c 1 5
SCO_SV tuvok 3.2v5.0.5 i80386    06/21/2001

09:55:08 scall/s sread/s swrit/s  fork/s  exec/s  rchar/s  wchar/s (-c)
09:55:09    1216      67      12    0.99    0.99   178441     3988
09:55:10     147      31       6    0.00    0.00   168723     8421
09:55:11      74      27       4    0.00    0.00   163644     3342
09:55:12     245      37       6    0.00    0.00   171821     8928
09:55:13     151      29       4    0.00    0.00   163770     3468

Average      367      38       6    0.20    0.20   169280     5629


Check before and after for system calls, forks, execs, etc.  If system
calls are high, it indicates one or more of the following:

   - Programs are suddenly being used more actively.
   - More programs in general are being run on the system.

Use ps to check whether this activity is expected. If forks/execs or
reads/writes are specifically high, look for the programs issuing those calls.


- IF %sys IS HIGH:

If you have a multiprocessing system, run the following command to see if
any device is sending thousands of interrupts and slowing down a CPU.

1) For OpenServer5:

# sar -j 1 5

For UnixWare7/Open UNIX 8/OSR6:

# sar -P ALL 1 5

Check programs that could be accessing the tape drive, third party smart
boards, or other non-disk drivers.


- IF BOTH USR AND SYS ARE HIGH:

1) Check the run queue and swap queue:

# sar -q 1 5

SCO_SV lunasco 3.2v5.0.4 Pentium    06/21/2001

10:46:29 runq-sz %runocc swpq-sz %swpocc (-q)
10:46:30     3.0     100
10:46:31
10:46:32     1.0     100
10:46:33     1.0     100
10:46:34     1.0     100

Average      1.5     100

Normally the average runq-sz should be less than 3, even on a taxed system.
If it is constantly higher than that, processes are not being serviced
quickly enough and the CPU could be to blame.  Either increase the CPU
speed or add CPUs, if possible.
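As a quick check, the runq-sz column can be averaged with a small filter (the avg_runq name is ours; empty samples like the 10:46:31 line above and sar's own Average line are skipped):

```shell
#!/bin/sh
# avg_runq: average the runq-sz column of "sar -q" output read on
# stdin, skipping headers, empty samples and sar's own Average line.
avg_runq() {
    awk '$1 != "Average" && NF >= 2 && $2 ~ /^[0-9]/ { sum += $2; n++ }
         END { if (n) printf "%.1f\n", sum / n }'
}

# e.g.  sar -q 1 5 | avg_runq
```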


- IF %wio IS HIGH:

1) Check for greedy processes:

# ps -el | more
  F S    UID   PID  PPID  C PRI NI     ADDR   SZ  TTY        TIME CMD
 71 S      0     0     0  0  95 20 fb117000    0    ?    00:00:01 sched
 20 S      0     1     0  0  66 20 fb117158  148    ?    00:00:00 init
 71 S      0     2     0  0  95 20 fb1172b0    0    ?    00:00:00 vhand
 71 S      0     3     0  0  95 20 fb117408    0    ?    00:00:16 bdflush
 71 S      0     4     0  0  95 20 fb117560    0    ?    00:00:00 kmdaemon
 71 S      0     5     1  0  95 20 fb1176b8    0    ?    00:00:18 htepi_daemon
.
.
.
 20 S      0   252     1  0  76 20 fb118830  152    ?    00:00:00 cron
 20 S      0   354     1  0  76 24 fb118988 233504    ?    00:00:03 report
 20 S      0   496     1  0  76 24 fb118ae0  200    ?    00:00:00 calserver

Check the SZ value to see if a process is either grabbing too much memory or
not freeing it up when needed.  In the above example, the "report" program is
grabbing a lot of memory.
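Scanning the SZ column by eye is error-prone on a busy system; a sort like the following ranks processes by SZ (column 10 in the listing above; the mem_hogs name is our own):

```shell
#!/bin/sh
# mem_hogs: list the five largest processes by SZ (in pages), reading
# "ps -el" output on stdin.
mem_hogs() {
    awk 'NR > 1 { print $10, $NF }' | sort -n | tail -5
}

# e.g.  ps -el | mem_hogs
```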


2) Check the amount of usable memory:

# sar -r 1 5

SCO_SV tuvok 3.2v5.0.5 i80386    06/21/2001

10:34:13   freemem   freeswp availrmem availsmem (-r)
10:34:14      8262    389120     28765     56421
10:34:15      8262    389120     28765     56421
10:34:16      8262    389120     28765     56421
10:34:17      8262    389120     28765     56421
10:34:18      8262    389120     28765     56421

Average       8262    389120     28765     56421

If freemem (listed in 4K pages) stays below 500 and freeswp is changing,
the system is paging because it can't fit what it needs in memory.
If this is happening all the time, increase RAM.

Please note:  In OpenServer, sar -r reports the amount of swap space on disk.
In UnixWare7/Open UNIX 8, it reports the swap space in virtual memory
(RAM plus swap).
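To watch for the low-memory condition automatically, a filter along these lines can be used (the low_mem name is ours; the 500-page floor follows the guideline above):

```shell
#!/bin/sh
# low_mem: flag "sar -r" samples where freemem (4K pages) drops below
# a floor, reading sar output on stdin.
low_mem() {
    floor=${1:-500}
    awk -v min="$floor" \
        '$1 != "Average" && $2 ~ /^[0-9]+$/ && $2 + 0 < min + 0 \
             { print $1 ": freemem down to", $2, "pages" }'
}

# e.g.  sar -r 5 12 | low_mem
```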


3) Check the disk i/o caching usage:

# sar -b 1 5

SCO_SV tuvok 3.2v5.0.5 i80386    06/21/2001

10:37:17 bread/s lread/s %rcache bwrit/s lwrit/s %wcache pread/s pwrit/s (-b)
10:37:18       0       0       0       0       0       0       0       0
10:37:19       0       0       0       0       0       0       0       0
10:37:20       0      60     100       0       1     100       0       0
10:37:21       0      -1     100       0       0       0       0       0
10:37:22       0      56     100       0       1     100       0       0

Average        0      24     100       0       0     100       0       0

If %rcache is continuously < 85 and/or %wcache is < 80, then the system is
having to go to the hard drive to load the disk cache. Increase the disk
cache by increasing NBUF by 50 percent and adjusting NHBUF appropriately.

For UnixWare7/Open UNIX 8.0.0, there are the additional kernel tunables:

"FDFLUSHR", this is the interval in seconds to check the need to write the
buffer cache and file pages to disk.  The default is 1.

"NAUTOUP", this is the number of seconds between filesystem updates.  The
default is 60 seconds.  Increasing NAUTOUP can improve performance, but also
increases the risk of data loss should a system crash occur.

4)  Check the actual disk i/o with:

# sar -d

A %busy figure averaging > 50% can indicate a disk bottleneck.  

The "avserv" column shows the service time after a request has arrived at the disk.

The "await" column shows the average wait time for an I/O request to be serviced. 

The "avque" column shows the average length of the wait queue for an I/O request.

# sar -d
UnixWare omega 5 7.1.3 i386    02/09/05

00:00:00 device         MB       %busy   avque   r+w/s  blks/s  avwait  avserv 
10:40:01 c0b0t0d0s1     12867       90    23.7       8      83  2556.6   112.8
10:40:01 c0b0t0d0s10    50           0     1.0       0       0     0.0   790.0
10:40:01 c0b0t0d0       17359       90    23.7       8      83  2556.4   112.8
10:40:01 c0b0t1d0s12    65458        1     1.6       2      21     4.1     6.9
10:40:01 c0b0t1d0       69459        1     1.6       2      21     4.1     6.9
10:40:01 c0b0t2d0s1     138918       2     3.2       4      86    13.3     6.1
10:40:01 c0b0t2d0       138919       2     3.2       4      86    13.3     6.1

Here, with %busy at 90 and no process consuming memory, there is a massive
amount of I/O occurring on the root disk.  If this is a RAID set, check
whether the logical volume is recovering from a disk failure in the set.
For example, for the HP CISS SMART ARRAY controller, run
/usr/bin/compaq/bin/diags/ciss_menu and look at the status of each logical
volume.

If the disk performance appears to be acceptable, but you have unacceptable
levels of CPU time devoted to waiting for I/O, then you may have a memory
bottleneck.
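The %busy rule of thumb above can be applied mechanically with a filter like this (the busy_disks name is our own; it assumes the UnixWare7-style "sar -d" column layout shown above, where %busy is the fourth column):

```shell
#!/bin/sh
# busy_disks: print devices from "sar -d" output (read on stdin)
# whose %busy exceeds a threshold, 50 by default.
busy_disks() {
    limit=${1:-50}
    awk -v max="$limit" \
        '$1 ~ /^[0-9][0-9]:/ && $4 ~ /^[0-9]+$/ && $4 + 0 > max + 0 \
             { print $2, $4 "%" }'
}

# e.g.  sar -d 5 12 | busy_disks 50
```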

5)  Check the use of file access system routines for disk fragmentation issues with:

# sar -a

         Report use of file access system routines (per second):

        iget/s
               number of S5, SFS, VXFS, BFS, and UFS files located by
               inode entry
        namei/s
               number of filesystem path searches
        dirbk/s
               number of S5 directory block reads issued
        %dnlc
               hit rate of directory name lookup cache

          If -R is specified then %dnlc is replaced by dnlc-hits and
          dnlc-miss, the counts of cache hits and misses.

slab6(7.1.4)# sar -a

UnixWare slab6 5 7.1.4 i386    07/16/09

00:00:00    iget/s  namei/s  dirbk/s    %dnlc
01:00:00         8       32        8       91
02:00:00         8       32        8       91
03:00:00         8       32        8       91
04:00:00         8       32        8       91
05:00:00         8       32        8       91
06:00:00        14       52       20       89
Average          9       35       10       90

See http://www.sco.com/ta/112921

Please Note: Make sure you do not increase NBUF by so much that you run out of
             regular general purpose memory (check with sar -r).

If neither memory nor the disk cache is a problem, check the disk i/o
system as you may need a RAID system or a faster host adapter system.

For OpenServer5 see:

http://osr507doc.sco.com/en/PERFORM/tuning_IO_rsc.html

For OpenServer6/UnixWare7 see:

Use 'rtpm' and check the I/O stats, rdblk and wrblk, to determine read
and write bottlenecks.

It is best displayed on the console or a graphical interface, which shows
high usage in RED.

Here's an example where there are a large number of TCP/IP failures and high CPU 
(%sys) usage:

1) netstat -s in the sysinfo output shows some errors, mainly icmp.
Within rtpm, if you use the arrow keys to highlight "TCP/IP:"
and hit enter, you will see details on exactly what types
of error are being encountered. For example, if you are getting
ICMP destination unreachable errors, this can be confirmed
by hitting enter with "ICMP:" highlighted.

2) The vflts value indicates the number of times a second 
the CPU failed to perform an address translation. This is an
indication of how busy the CPU or CPU group is.

3) The CALLS/s value is an indication of the number of system
calls a second being made by the applications running on the
server. The higher this value, the busier the server is. You can
highlight CALLS/s: and hit enter to see the types of calls being
made. For example, you might see a lot of read or
write calls if your users are using Samba and a database.

NOTE:
      In general, ensure that the latest patches, host bus adapter drivers,
      and network drivers are installed; these are available from:

      http://www.sco.com/support/download.html


NOTE:
      There are other tools available to assist in identifying the processes
      and analysing the output of sar.

      These are:

      Check the Message Queues using:

          # sar -m

          # ipcs -qop

          Technical Article 105414, "How do I find out the open files and stack trace used by a hanging process? "

      Check the Streams Buffers using:

          # netstat -m

          Technical Article 116684, "OpenServer 5, How to Debug STREAMS failures."

      List of Open Files utility, lsof, available from:

          http://www.sco.com/skunkware
     

      Files in /var

          http://uw714doc.sco.com/en/SM_concepts/_Files_in_var.html

      For OpenServer5/OpenServer6 and UnixWare7:

          "top" available from Skunkware at http://www.sco.com/skunkware

      TCP/IP Monitoring:

          "inconfig"

          See /etc/inet/inet.dfl or /etc/default/inet for OpenServer5

          # inconfig arpprintfs 10
          # inconfig igmpprintfs 10
          # inconfig ipprintfs 10
          # inconfig icmpprintfs 10
          # inconfig tcpprintfs 10
          # inconfig udpprintfs 10

          For OpenServer5 only you can also add:

          # inconfig mbclprintfs 10
          # inconfig nbprintfs 10
          
          This can increase the debug level for these protocols and should
          generate additional debug messages in your /var/adm/syslog.

          They can be turned off with the 0 parameter.

          The verbosity increases with the value:

          0=none, 1=minimal, 10=most verbose

          The debug messages will have error codes and a reverse hex
          representation of the IP address where the bad packet originated.

          For example you may see:

          NOTICE:icmp_error(F08296E4,3,3)

          In this case the IP associated with F08296E4 is translated by 
          breaking it into hex pairs and reversing the 4 octets.

          F0 = 240
          82 = 130
          96 = 150
          E4 = 228

          So this is from the IP 228.150.130.240.

          In the file /usr/include/netinet/ip_icmp.h  we can translate the 3, 3
          part of the message to mean:

          #define ICMP_UNREACH            3       /* dest unreachable, codes: */
          #define ICMP_UNREACH_PORT       3       /* bad port */

          Meaning that the packet was destined to a non-existent port or to a 
          port which did not have any services.

          To get further information about these packets might require using a
          sniffer to see what was actually in the packet and where it was 
          destined.
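          The reverse-hex translation above can also be scripted; a small
          sketch (the hexip name is our own):

```shell
#!/bin/sh
# hexip: translate the reversed-hex address in an icmp_error NOTICE
# (e.g. F08296E4) into a dotted-decimal IP by splitting it into hex
# pairs and reversing their order.
hexip() {
    h=$1
    a=$(printf '%d' "0x$(echo "$h" | cut -c7-8)")
    b=$(printf '%d' "0x$(echo "$h" | cut -c5-6)")
    c=$(printf '%d' "0x$(echo "$h" | cut -c3-4)")
    d=$(printf '%d' "0x$(echo "$h" | cut -c1-2)")
    echo "$a.$b.$c.$d"
}

hexip F08296E4      # prints 228.150.130.240
```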


      Monitoring and tuning the system:

          OpenServer5:
          -----------
          - http://osr507doc.sco.com/en/PERFORM/autotune_desc.html

          - http://osr507doc.sco.com/en/PERFORM/kernel_configure.html

          - SkunkWare Products:

           The "hog" package available from Skunkware, above; this includes
           "memhog", "iohog" and "cpuhog".

           SarCheck available for OSR5 from:

               ftp://ftp2.sco.com/pub/skunkware/osr5/sysadmin/SarCheck/

           "u386mon" for OSR5 available from Skunkware from:

               http://www.sco.com/skunkware/faq.html#u386



          UnixWare7/OpenServer6:
          ---------------------
          - http://uw714doc.sco.com/en/SM_perform/CONTENTS.html

          - http://uw714doc.sco.com/en/FEATS/new_features_710_performance.html

          - Real Time Performance Monitor for LWP Processes:

            rtpm(1M) for UnixWare7/Open UNIX 8.0.0/OpenServer6
        
             - http://uw714doc.sco.com/en/SM_perform/_Real-Time_Performance_Monitor.html

             - http://uw714doc.sco.com/en/man/html.1M/rtpm.1M.html
  
          - The System Monitor:

            http://uw714doc.sco.com/en/SM_perform/_System_Monitor.html

          - Summary of system administration tasks:

            http://uw714doc.sco.com/en/HANDBOOK/saT.tasksummary.html


      AIM Benchmarking:

          This can be obtained from:
   
             http://sourceforge.net/projects/aimbench        
  
          Download the source and compile using the "make" file supplied.
          A "cc" compiler from the Development System will be required.
          Compile with the "-lsocket -lnsl" linker options.
                     
          For OpenServer5, edit disk1.c to comment out the following
          definition (so O_SYNC can be used on Acer/Altos):

          /* #define _M_XOUT */

       sysinfo:

       For the latest sysinfo, see: http://www.sco.com/support/sysinfo.html

       Once installed, use /usr/lib/sysinfo.d/bin/stune_verify to check that
       the values in your "stune" file do not exceed the maximums or fall
       below the recommended default values.

       RAID Levels and their impact: (for %wio):

       RAID LVL	        Striping	Mirroring	Error Correction
       RAID0	        Block Level	   No	        n/a
       RAID1	        n/a	           Yes	        n/a
       RAID2	        Bit Level	   No           Hamming ECC
       RAID3	        Byte Level	   No	        Dedicated Parity
       RAID4	        Block Level	   No	        Dedicated Parity
       RAID5	        Block Level	   No	        Distributed Parity
       RAID6	        Block Level	   No	        Dual Distributed Parity
       RAID7	        Cached		   No           Dedicated Parity
       RAID0+1	        Block Level	   Yes	        n/a
       RAID0+3	        Block & Byte Level No		Dedicated Parity
       RAID0+5	        Block Level        No           Distributed Parity
       RAID1+5	        Block Level	   No	        Distributed Parity
       RAID1+0	        Block Level	   Yes	        n/a
       RAID3+0	        Block & Byte Level No		Dedicated Parity
       RAID5+0	        Block Level	   No	        Distributed Parity
       RAID5+1	        Block Level	   No	        Distributed Parity

       RAID LVL	Read Speed Write Speed	Fault Tolerance	Market Acceptance Cost
       RAID0	    5	        5	        1	      5	        Lowest
       RAID1	    3	        3	        5	      5	        High
       RAID2	    1	        1	        3	      1	        Highest
       RAID3	    3	        1	        3	      5	        Medium
       RAID4	    5	        1	        3	      1	        Medium
       RAID5	    5	        1	        3	      5	        Low
       RAID6	    5	        1	        5	      1	        High
       RAID7	    5	        5	        3	      3	        Highest
       RAID0+1	    5	        5	        5	      5	        High
       RAID0+3	    5	        1	        3	      1	        High
       RAID0+5	    5	        3	        3	      1	        High
       RAID1+5	    5	        3	        5	      1	        Highest
       RAID1+0	    5	        5	        5	      5	        High
       RAID3+0	    5	        1	        3	      1	        High
       RAID5+0	    5	        3	        3	      1	        High
       RAID5+1	    5	        3	        5	      1	        Highest

       Marks out of 5.

       RAID LVL	Application
       RAID0	Non Critical Data requiring high speed and low cost
       RAID1	High Fault Tolerance
       RAID2	Rarely Used
       RAID3	Large file applications requiring high speed and redundancy
       RAID4	A compromise between RAID3 and RAID5
       RAID5	General purpose, relational database & ERP applications
       RAID6	Similar to RAID5 with additional fault tolerance
       RAID7	Specialised high end applications
       RAID0+1	High Performance and Reliability
       RAID0+3	Higher Performance than RAID3 for large file applications
       RAID0+5	Increased Capacity over RAID5
       RAID1+5	Very High Fault Tolerance
       RAID1+0	High Performance and Reliability
       RAID3+0	Higher Performance than RAID3 for large file applications
       RAID5+0	Increased Capacity over RAID5
       RAID5+1	Very High Fault Tolerance

NOTES:
       IT IS RECOMMENDED TO ANALYSE THE sa DATA FILES ON THE SAME OPERATING
       SYSTEM AS THE ORIGINAL.

       eg. Do not take a "sa" sar data file from OSR6 and analyse it on a 
           UW714 server because the results will not be correct.

       Also, analysing the data in a different timezone from where it was
       collected can result in apparently missing data around the midnight
       mark where the timezones differ.

       It is recommended to simply use:

       # cd /var/adm/sa
       # sar -A -f sa<day> > /tmp/sa<day>.txt

       and examine the /tmp/sa<day>.txt file.


TROUBLESHOOTING:
---------------
Performance Tuning:

- Identify bottleneck

sar, rtpm (OSR6/UW7), (prfstat, prof, lprof)
CPU performance

sar -u
00:00:00    %usr   %sys   %wio  %idle  %intr
00:00:01      30     10     10     46      4

high usr, investigate with truss, prof
high sys, intr, investigate with prfstat
high wio, storage throughput


Storage Performance:

- Hardware configuration

  Device topology:

     don't connect slow devices and fast devices on the same bus 
     e.g. put your slow tape drive on a separate controller

  Cabling:

     ensure your cables are up to specifications

  Hardware RAID:

     performance (RAID 0) vs. integrity (RAID 1, RAID 5)

  Filesystem tuning:

     fsadm, block size, increase logsize (@ mkfs only)
     mount options; tmplog


Memory:

 Avoid swapping
 DEDICATED_MEMORY, use if using shared memory
   mkdev dedicated
   Dedicated memory reserves physical
   Saves kernel virtual
   Reduces paging, uses large mappings (PSE)
 SEGKMEM_PSE_BYTES
 Add more memory!


Tuning for largefile support:

 HDATLIM, SDATLIM, HVMMLIM, SVMMLIM, HFSZLIM, SFSZLIM set to 0x7fffffff  
  (unlimited)
 /etc/conf/bin/idbuild -B && init 6
 fsadm /mountpoint or raw device
 fsadm -o largefiles /
 OSR6 defaults to largefiles, UW7 does not

Building large file aware applications:

  -D_FILE_OFFSET_BITS=64


NOTE:
         To assist in investigating a process, try running it under 'truss':

         # truss -o /tmp/binary.truss.$$ -aef -wall -rall -vall <command>

         Alternatively, use a wrapper script, e.g.:

         # mv binary binary.orig

         Then use a script like this saved as the original binary name.

         :
         truss -o /tmp/binary.truss.$$ -aef -wall -rall -vall binary.orig $*


NOTE:
         To gather statistics on a 'sar' you can run it manually with:

         # sar -A -o <output file>

         and analyse with, for example:

         # sar -u -f <output file>

         A record of output files is stored in /usr/adm/sa

         A script to automatically gather performance statistics follows.
         Save it as, for example, "sar_check", make it executable, and run
         it with the sampling period in seconds:

# ./sar_check 10

----- cut here -----

#!/bin/sh 

# 1 minute
period=${1:-"60"} 

# Default: enough iterations to cover 24 hours (86400 seconds)
iterations=${2:-"`expr 86400 / $period`"} 


echo "sar: $iterations iterations at $period second intervals" 
/usr/sbin/sar -A -o /local/sar/sa`date +%d` $period $iterations > /dev/null

----- cut here -----

          This assumes /local/sar exists.

          It can also be run in conjunction with the Server Certification Tests
          to give performance information, from:

          http://www.sco.com/developers/hdk/testsuites/

NOTE:
          Other simple tasks to save on resources would be, on all SCO
          UNIX systems, to run:

          # scologin disable

          and for OpenServer5 and OpenServer6:

          Turn off any unnecessary tty's running on the console,
          if you like, to save on memory:

                echo "Disabling tty05...tty12 ..."
                for i in 05 06 07 08 09 10 11 12
                do
                        echo $i
                        disable tty${i}
                done

          Lastly, you may wish to disable the unused Calendar Server (not OMS
          related) with:

          # cd /etc/rc2.d
          # ./P95calserver stop
          # mv P95calserver p95calserver

NOTE:
          Troubleshooting Test file (test.sh):

----- Start to cut here -----

echo "Turn Kernel Profiling on ... "
prfstat on
echo "Empty Statistic Files ... "
rm -f prfsnap.out
rm -f sadc.out
echo "Take a Kernel Snapshot ... "
prfsnap prfsnap.out
echo "Collect 5 mins of Sar info ... "
/usr/lib/sa/sadc 5 60 sadc.out &
dfspace
echo "Generate a count=2000 (2GB) file, or count=10000 (10GB) if large files are supported ... "
timex dd if=/dev/zero of=testfile bs=1048576 count=2000
dfspace
echo "Remove Test File ... "
rm testfile
dfspace
echo "Take a Kernel Snapshot ... "
prfsnap prfsnap.out
echo "Turn Kernel Profiling off ... "
prfstat off
echo "Example Kernel Profile Analysis ... "
prfpr -PALL prfsnap.out 0
echo "Example Sar Analysis ... "
sar -f sadc.out

----- Finish Cut Here -----

SEE ALSO:
          sar(ADM), vmstat(C), idtune(ADM)

      For differences in kernel tuning between OpenServer5 and OpenServer6/
      UnixWare7, see:

      http://osr600doc.sco.com/en/SM_perform/osr507kerntuns.html

      Technical Article 117424, "How can I analyze on-the-spot performance
      data with SarCheck?"

      Technical Article 114622, "How do I install and run SarCheck?"

      http://www.aplawrence.com/Unixart/slow.html