MST-Bench(1) FreeBSD General Commands Manual MST-Bench(1)

NAME
MST-Bench - Maximum Sustainable Throughput Benchmark

SYNOPSIS
mst-bench [--help] [--trials N] [--file-size GiB]

OPTIONS
--help
Print a basic help message.

--trials N
Run N trials and report the average. The default is 3.

--file-size GiB
Use a file of GiB gibibytes for the disk tests. The default is 2 * physical memory size, which is generally enough to overwhelm disk buffering and produce a reasonable estimate of sustainable throughput. However, on systems with little physical memory, a large disk cache on the controller may still hold much of the default file and inflate the results, producing an overestimate of disk speed. In those cases, specifying a larger file size may produce more realistic results.
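
For example, to average five trials over a 64 GiB test file, one might run the following; the option syntax is taken from the synopsis above, and the specific values are purely illustrative:

    mst-bench --trials 5 --file-size 64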

DESCRIPTION
MST-Bench is a simple program for reporting the maximum sustainable memory and disk throughput one could expect from an optimal program. It does not attempt to report the hardware transfer rate (which is not generally useful) or to estimate the performance of real programs.

The goal is to provide a theoretical maximum to which real programs can be compared. For example, if MST-Bench reports 10 seconds to sequentially read the generated file, then we know that 10 seconds is the theoretical best time any program can achieve while streaming this file. If fgrep string bench.tmpfile takes 15 seconds, then it is achieving about 2/3 of the maximum theoretical disk throughput. We might then investigate whether fgrep's CPU bottleneck can be reduced to the point where it can keep up with the disk input. Once fgrep runs close to 10 seconds on this file, we know that it cannot be sped up much further.
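
As a concrete workflow, one might first run the benchmark, note its reported sequential read time, and then time the real program against the same file. The search string here is hypothetical; time and fgrep are standard utilities, and bench.tmpfile is the file MST-Bench generates:

    mst-bench                             # note the reported sequential read time
    time fgrep somestring bench.tmpfile   # compare the elapsed (real) time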

MST-Bench estimates maximum sustainable throughput for memory and disk under typical circumstances.

It runs the following speed tests:

Sequential memory access for a small array that should result in a high cache hit ratio, where most accesses are satisfied by the cache.

Sequential memory access for a large array that should result in a low cache hit ratio, where many accesses are not satisfied by the cache.

Sequential disk write of a file much larger than physical memory, so that disk buffering has a minimal impact and the reported throughput represents sustainable speed for the disk hardware.

Sequential disk read of a file much larger than physical memory, so that disk buffering has a minimal impact and the reported throughput represents sustainable speed for the disk hardware.

Sequential disk rewrite of a file much larger than physical memory, so that disk buffering has a minimal impact and the reported throughput represents sustainable speed for the disk hardware. In many file systems, overwriting shows different performance characteristics than new writes.

Random disk read of a file much larger than physical memory, so that disk buffering has a minimal impact and the reported throughput represents sustainable speed for the disk hardware. The random read reads the same file as the sequential read, reading every block in the file exactly once but in random order. This provides some idea of the latency of disk access. Note that dividing the file into a larger number of smaller random reads will result in lower measured performance.
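
For intuition only, the sequential disk write and read tests are roughly analogous to the following manual measurements with dd(1). This is not how MST-Bench is implemented; the block size and file size are illustrative, and the caveat above still applies: the file must be much larger than physical memory for the result to reflect sustainable disk speed rather than buffering.

    # rough stand-in for the sequential write test: 16 GiB in 1 MiB blocks
    dd if=/dev/zero of=bench.tmpfile bs=1m count=16384

    # rough stand-in for the sequential read test of the same file
    dd if=bench.tmpfile of=/dev/null bs=1m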

FILES
bench.tmpfile - file generated in the current directory for the disk speed tests

BUGS
Please report bugs to the author and send patches in unified diff format (see diff(1) for more information).
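
For reference, a unified diff can be produced with the -u option of diff(1); the file names below are hypothetical:

    diff -u mst-bench.c.orig mst-bench.c > mst-bench.patch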

AUTHORS
J. Bacon
