System Performance

Document path Reference Contents > Manager Controls > System Performance

This window is opened by the File -- System Manager -- System Timing Test menu function and provides you with tools for regularly monitoring the performance of the system and each terminal on it.

There are two tests available. The traditional test is a simple overall performance test. The advanced test is for large datafiles containing internal file slots with more than 10,000 records; it is more sophisticated and seeks to find an average randomised record access performance.

The window has two tab panes.

Traditional Timing Test

Advanced Data Access Performance Test

Traditional Timing Test

To run a test, click on the button. The test may take some time. The test is in two parts that run consecutively. All times are in seconds.

Firstly, 50 records of each of the major master files are accessed and timings measured. The results are displayed in the Data access performance box. Because of the DBMS index caching system, repeating the test may improve the results, especially when operating in multi-user mode. To get consistent and comparable results, you should therefore run the test only once.

Secondly, four CPU/RAM processes are timed. The results of this are displayed in the Terminal processing performance box.
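The two-part approach described above can be illustrated with a minimal sketch. This is not the Caliach Vision implementation; the `read_record` routine and the dummy in-memory "file" are placeholders standing in for a real DBMS record fetch:

```python
import random
import time

def time_data_access(read_record, record_ids, sample_size=50):
    """Time `sample_size` random record reads; return (total, average).

    `read_record` is a placeholder for whatever routine fetches one
    record from the DBMS -- it is not a Caliach Vision API.
    """
    sample = random.sample(record_ids, sample_size)
    start = time.perf_counter()
    for rid in sample:
        read_record(rid)
    total = time.perf_counter() - start
    return total, total / sample_size

def time_cpu_work(iterations=1_000_000):
    """Time a pure CPU/RAM task, analogous to the terminal tests."""
    start = time.perf_counter()
    acc = 0
    for i in range(iterations):
        acc += i * i
    return time.perf_counter() - start

# Demonstration with a dummy in-memory "file" of 10,000 records:
records = {i: f"record {i}" for i in range(10_000)}
total, avg = time_data_access(records.get, list(records), 50)
print(f"50 reads: {total:.4f}s total, {avg:.6f}s average")
print(f"CPU test: {time_cpu_work():.4f}s")
```

In the real test the data-access half is dominated by disk and network latency, while the CPU half depends only on the terminal hardware, which is why the window reports them in separate boxes.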

In normal use, the general performance of the system is affected by a mixture of internal terminal CPU speed and external disk access performance. Drawing to the screen, list activity and calculations are all terminal activities, whereas reading and writing data is a disk activity, and in multi-user situations subject to network activity and quality.

The critical measure of data access performance is the average time. This is the average time to access a single record of the tested file. For some files, in practice, records from related files also need to be read, but this testing does not measure that. The test results can be regarded as the average time to read a random record. Many factors affect the average time for a record access: data file fragmentation, disk performance, server software and machine speed, network quality, throughput rate and traffic, and also the average size of a record, particularly the extent of text stored. For single-user systems the extent of disk cache used may also be significant. Terminal performance is simply related to the type of machine in use.

TIP: It is recommended that the test is run routinely, say once a month, and always on the same terminal type. Print and keep the results and use them to monitor trends. Adopting this policy will give good warning of any need to improve the network/server configuration as activity grows on the system.



Perform Test Run

The results can be printed, using this button, for future reference and comparison. You will be presented with the Systems Test Results Report window in which comments about the test circumstances can be entered. These comments will be printed on the report. The printed report gives a full list of the number of records in each of the files. An Activity Log record is saved when this report is printed.

Print Results

To print a report of the test and a listing of the number of records in every file within the system.


Advanced Data Access Performance Test

When you switch to this tab pane, the system examines the datafile and selects only those file slots that have more than 10,000 records. Of those, it can choose only the ones it is designed to test; if you have other large data slots, they will be listed in a message. The resulting files will be tested on one or more indexes, as shown in the list. The first and last record index values are shown in the list.

When you operate the Perform Advanced Test button, the process first collects a randomised list of values for the indexes. For date and number indexes these values are mathematically derived from within the range limits of the index. For parts, customers, suppliers and G/L accounts a randomised list of actual values is collected.

These randomised lists provide the best approach to testing the system's ability to access records in a deliberately haphazard manner, simulating the jumping around the database that occurs in normal day-to-day operation.

TIP: Look at the first and last values, as there may be extremes that distort the random nature of the values used. To ensure that extremes are ignored, the date range is limited to a first of 1 JAN 2000 and a last of today's date. Numbers have a minimum of 1.
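The randomised value generation and the clamping described above could be sketched as follows. The function names, list size default and the number index's upper bound are illustrative assumptions, not the actual Caliach Vision routines:

```python
import datetime
import random

def random_date_values(n=1000, first=datetime.date(2000, 1, 1), last=None):
    """Derive n randomised dates, clamped to 1 JAN 2000 .. today,
    mirroring the range limits described above."""
    last = last or datetime.date.today()
    span = (last - first).days
    return [first + datetime.timedelta(days=random.randint(0, span))
            for _ in range(n)]

def random_number_values(n=1000, low=1, high=10_000):
    """Derive n randomised numeric index values, minimum 1.
    The upper bound here is an arbitrary illustration."""
    return [random.randint(low, high) for _ in range(n)]

def random_actual_values(existing_keys, n=1000):
    """For parts, customers, suppliers and G/L accounts: sample a
    randomised list of actual key values rather than derived ones."""
    return random.choices(list(existing_keys), k=n)
```

The distinction matters: derived date/number values can be generated purely from the index range limits, whereas part, customer, supplier and account codes must be drawn from values that actually exist in the file.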

After generating the randomised lists, the process works through the files and indexes, finding 1000 records based on the random values. First a simple find scan is performed, and then a scan with a join to a related record. This process is carried out twice.

The first pass produces timings for Simple 1 and Join 1, the second for Simple 2 and Join 2.

Simple 1 is always the largest because on this pass no file data has been cached by the operating system, so everything has to be extracted from disk. Typically Join 1 is faster because progressively more of the data comes from memory rather than disk. For the same reason, the test will normally race through the second pass to produce the Simple 2 and Join 2 values.

TIP: Because of data file and index caching, it is a good idea to perform this test before any other activity in a Caliach Vision session. If you have already been working with data, some of it will already be cached, which artificially improves the measured performance.

The result is the average time in seconds it takes to perform the 4 scan operations on an index. This is equivalent to 6000 find operations on random records throughout the file.
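The 6000 figure can be reconciled with the 4 scans of 1000 records if each join scan is assumed to count as two finds per record (the record itself plus its related record); the text does not break the figure down, so this accounting is an assumption:

```python
RECORDS_PER_SCAN = 1000
PASSES = 2

# Per pass: one simple scan (assumed 1 find per record) plus one join
# scan (assumed 2 finds per record: the record and its related record).
finds_per_pass = RECORDS_PER_SCAN * 1 + RECORDS_PER_SCAN * 2
total_finds = finds_per_pass * PASSES
print(total_finds)  # prints 6000
```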

After running the test, print the list of results using the list context menu.

TIP: Poor performance may mean that the datafile is fragmented on the disk (broken into many small parts spread over the surface of the hard disk, which means the read heads have to fly around excessively). This can be improved by disk utility programs. Alternatively, or additionally, it may mean that data is fragmented within the datafile structure itself. This internal fragmentation inevitably happens over time and can be significant, especially if you have multiple datafile segments. The only way to improve this is to run the DataFix Utility and create a new datafile. This process extracts all data from an existing datafile and copies it in a methodical way into the new empty datafile. The methodical populating process means that all file slot data and indexes are packed together, leading to optimum disk access. To be absolutely certain, carry out the DataFix first and then use a disk utility to defragment the hard disk.



Perform Advanced Test

To perform an advanced data access performance test. It will pass through the list twice.


See also: -

Compiled in Program Version 3.10. Help data last modified 29 MAY 2010 06:25. Class wSysPerform last modified 29 MAY 2010 04:09:27.
