FRAGG PC

Defrag Benchmarks Used

Why defrag? The goal is efficiency. If you are defragmenting your computer for any other reason, then you've missed the point. But how do you measure an efficient computer? We're not talking about computation cycles here; the question is how to measure responsiveness and hard drive efficiency. One method is to measure the time it takes to read a file. But which file?

Benchmark Software

The most obvious (but wrong) answer is to use a commercial PC performance benchmarking program, like PCMark Vantage from Futuremark. These programs are designed to compare one hardware configuration with another, and provide no real way to measure whether the computer works better or worse depending on the file layout. If you are a PC gamer, or want to compare workstation A against workstation B, these programs run tests that push various capabilities of the system to their limits, allowing you to note that A has faster 3D graphics than B, but B has better multimedia capabilities than A. I have included some of these tests for completeness, even though they have little to do with drive efficiency.

File Access Timer

The next type of measurement utility measures the time it takes to read files. Obviously an efficient drive will deliver the data faster than an inefficient one, but this also depends on which files are being requested, and in what sequence. It doesn't make sense to measure files that are hardly ever used. ReadFile and FileAccessTimer do a pretty good job of measuring file read times. The latter was written by engineers at Raxco for testing their PerfectDisk software. I have also used the scan time of the Microsoft Security Essentials 1.0 antivirus program, because it inspects the contents of many files. By measuring the time it takes to do a complete scan, you get a feel for the overall performance of the drive, not just specific files. Of course this penalises defrag programs that place some files in "faster" locations and others in "slower" locations, assuming that both classes of files are measured. That's why there are other benchmarks as well.
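The basic idea is easy to sketch. Here is a minimal VB example of timing a single file read; this is not the actual ReadFile or FileAccessTimer utility, and the file path is just a placeholder. A cold read after a reboot avoids flattering results from the Windows file cache.

    ' Minimal sketch: time how long it takes to pull one file off the drive.
    ' The path is an assumption; run after a reboot so the file is not
    ' already sitting in the Windows file cache.
    Sub TimeFileRead()
        Dim tStart As Single
        Dim buffer() As Byte
        Dim fileNum As Integer

        tStart = Timer                          ' seconds since midnight
        fileNum = FreeFile
        Open "C:\TestDocs\BigFile.dat" For Binary Access Read As #fileNum
        ReDim buffer(1 To LOF(fileNum))
        Get #fileNum, , buffer                  ' read the whole file
        Close #fileNum
        MsgBox "Read took " & Format(Timer - tStart, "0.000") & " seconds"
    End Sub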

Measuring File Efficiency

While the file "read time" is easy to measure, how do you measure the time it takes to "open" a file? By that I mean the time from when you double-click on a file to when you can start editing it in its native application. This required the development of some special test files and software. Visual Basic has a function called Timer(), which returns the number of seconds elapsed since midnight. This is available in Microsoft Office documents as well as VB, so I wrote a VB utility ("TheTime") that noted the time using Timer(), then asked Windows to open a file, and, once the file was open, displayed the before and after times. In the case of Word and Excel I was able to verify the timing by getting the document to display the Timer() value once opened. Similarly with Access 97, except that the second Timer() value was later than the other measurement.
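To give a feel for the approach, here is a minimal sketch of the idea (not the actual "TheTime" utility; the paths, file names and ShellExecute call are my own assumptions): a launcher records Timer() just before asking Windows to open the test document, and the document's own AutoOpen macro records Timer() again once it has finished loading.

    ' --- Launcher (VB6, standard module) ---
    ' ShellExecute opens the file in its registered application,
    ' just like double-clicking it in Explorer.
    Private Declare Function ShellExecute Lib "shell32.dll" Alias "ShellExecuteA" _
        (ByVal hwnd As Long, ByVal lpOperation As String, ByVal lpFile As String, _
         ByVal lpParameters As String, ByVal lpDirectory As String, _
         ByVal nShowCmd As Long) As Long

    Sub TimeDocumentOpen()
        Dim tStart As Single
        tStart = Timer                          ' seconds since midnight
        Open "C:\Bench\launch_time.txt" For Output As #1
        Print #1, tStart                        ' remember when we asked
        Close #1
        ShellExecute 0, "open", "C:\TestDocs\TimerTest.doc", _
                     vbNullString, vbNullString, 1
    End Sub

    ' --- In the Word test document itself (VBA) ---
    Sub AutoOpen()
        ' Runs once Word has finished opening the document; the difference
        ' between this value and the launcher's value is the "open" time.
        MsgBox "Opened at Timer = " & Timer
    End Sub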

[Screenshot: Timing Source Code]

But how do you "open" a document under consistent conditions? That led to the creation of another VB utility ("DelayLaunch"), which runs when Windows starts up and waits a specified number of seconds before doing something. In this case it used "TheTime" to open a Word or Excel test document. The test machine is set up to log in automatically, so the entire measurement process can be run simply by rebooting the machine. No operator error, and consistent test conditions each time.
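A sketch of the DelayLaunch idea, assuming a hypothetical 60-second delay and placeholder paths (the real utility's settings differ):

    ' Placed in the Startup folder so it runs at every logon: wait a fixed
    ' number of seconds for the logon to settle, then hand over to the
    ' timing utility. The delay, paths and command line are assumptions.
    Private Declare Sub Sleep Lib "kernel32" (ByVal dwMilliseconds As Long)

    Sub Main()
        Const DELAY_SECONDS As Long = 60
        Sleep DELAY_SECONDS * 1000
        Shell "C:\Bench\TheTime.exe C:\TestDocs\TimerTest.doc", vbNormalFocus
    End Sub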

[Screenshot: Timing Result]

Another way of measuring the efficiency of the file placement on the hard drive is to measure the time the PC takes to reboot. But how do you start a timer when the machine hasn't rebooted yet? The answer was to write a program that noted the time, asked Windows to reboot, and then noted the time again once it ran after the restart. By putting this utility in the Windows Startup folder, I could measure the time from "reboot" to "startup", and so the utility is called "RebootTimer".
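The sketch below shows one way this can work (the file names and the /go switch are my own assumptions, not necessarily how RebootTimer does it): the same program is run manually to start the measurement, and again from the Startup folder once Windows is back up.

    ' RebootTimer sketch: the first run notes the time and restarts Windows;
    ' the run from the Startup folder reads that time back and reports the
    ' difference. Assumes the reboot does not span midnight (Timer wraps).
    Private Const FLAG_FILE As String = "C:\Bench\reboot_start.txt"

    Sub Main()
        Dim tStart As Single
        If Dir(FLAG_FILE) <> "" Then
            ' Second run, launched from the Startup folder after the reboot
            Open FLAG_FILE For Input As #1
            Input #1, tStart
            Close #1
            Kill FLAG_FILE                      ' measure only once
            MsgBox "Reboot took " & Format(Timer - tStart, "0.0") & " seconds"
        ElseIf Trim$(Command$) = "/go" Then
            ' Run manually with /go: note the time and ask Windows to restart
            Open FLAG_FILE For Output As #1
            Print #1, Timer                     ' seconds since midnight
            Close #1
            Shell "shutdown -r -t 0", vbHide
        End If
        ' A normal logon with no measurement pending does nothing.
    End Sub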

[Screenshot: Reboot Timer result]

So now we can open files under consistent conditions, and time how long the PC takes to reboot under different defrag situations. But what about "modifying" a file? How does fragmentation affect that? I took a program I had used to determine how well Access 97 was working on a network, and modified it to run the same benchmark locally. Essentially it creates a whole bunch of records in a table, and then modifies them in a random sequence. This causes the Access database to grow, and it usually becomes fragmented in the process. The fragmented file then performs less efficiently, so the whole test takes longer.
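The benchmark itself isn't reproduced here, but the sketch below shows the general shape of such a test using DAO. The database path, table name, field name and record count are all placeholders, and a reference to the Microsoft DAO object library is assumed.

    ' Rough sketch of a local Access stress test: build a fresh .mdb,
    ' insert a batch of records, then update them in a random order so the
    ' file grows and tends to fragment. All names and sizes are assumptions.
    Sub RunAccessBenchmark()
        Const RECORD_COUNT As Long = 10000
        Dim db As DAO.Database, rs As DAO.Recordset
        Dim i As Long, n As Long, tStart As Single

        tStart = Timer
        Set db = DBEngine.CreateDatabase("C:\Bench\TestData.mdb", dbLangGeneral)
        db.Execute "CREATE TABLE BenchTable " & _
                   "(ID COUNTER CONSTRAINT pkBench PRIMARY KEY, Payload TEXT(250))", _
                   dbFailOnError

        Set rs = db.OpenRecordset("BenchTable")
        For i = 1 To RECORD_COUNT               ' create the records
            rs.AddNew
            rs!Payload = String(250, "x")
            rs.Update
        Next i
        rs.Close

        Randomize
        For i = 1 To RECORD_COUNT               ' modify them in a random sequence
            n = Int(Rnd * RECORD_COUNT) + 1
            db.Execute "UPDATE BenchTable SET Payload = '" & _
                       String(250, Chr$(65 + Int(Rnd * 26))) & _
                       "' WHERE ID = " & n, dbFailOnError
        Next i

        db.Close
        MsgBox "Benchmark took " & Format(Timer - tStart, "0.0") & " seconds"
    End Sub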

Creating the Test System

It turned out to be a major challenge to set up a test system. First you have to install Windows, then Office, then all the benchmark programs and a whole bunch of other utilities, and apply all the Windows updates and patches. By the time this is done, the system has quite a few fragmented system files, but no data. So I collected thousands of documents from various computers, along with movie files, audio books, music files and other stuff, to simulate a typically disorganised and overloaded home computer.

Next, the whole mess was backed up on a sector-by-sector basis to an external drive, using Acronis True Image Home. Their 2009 version didn't work correctly, so I reverted to version 11, build 8101, which can back up and restore on a sector-by-sector basis. The benchmark computer now has five different test layouts:

  • Windows XP Professional 32-bit
  • Windows Vista Home Basic 32-bit
  • Windows Vista Home Basic 64-bit
  • Windows 7 Home Premium 32-bit
  • Windows 7 Home Premium 64-bit

I plan to add a server test machine, and run equivalent server and VM benchmarks. This test rig has been benchmarked without any defrag programs running, so to test a given program all that is required is to install it, allow it to do its magic, and then run the benchmarks again. Some of the reboot-related tests are run to "teach" the defrag program about the normal use of the machine. Not all defrag programs "learn" from this approach, but some do. This gives some programs an advantage in the test results, but a clever program that can work out how best to optimise a given system is more desirable than a dumb "one size fits all" one.

As it turns out, even with sector-by-sector images, the reference system is difficult to keep constant, because Windows is never static and there are updates every month, if not sooner. So I have to disconnect the test machine from the internet when running benchmarks. This adds more uncertainty to the mix: on the one hand we want to know how a program deals with updates, and on the other hand we want to compare results between different programs. I hope I have found a happy compromise.

[Screenshot: Windows Startup]

The technical specs of the test machine (Sahara DT825125-P601) are:

  • Sahara Micro ATX case
  • Microsoft Windows Vista Home Basic
  • Intel Pentium Dual Core E2200 2.2 GHz CPU
  • Foxconn LGA 775 motherboard (S+V+L)
  • 1024MB DDR2 667 memory (upgraded to 2048MB)
  • 160GB SATA2 Samsung hard drive (HD161HJ)
  • Samsung SATA DVD writer
  • Internal 52-in-1 card reader
  • Keyboard & mouse
  • Speaker set & internal modem

Resources and Downloads

Here are some of the resources used for this article:


