Talk:Winter 2009 SYA810 Block Device Benchmark Scripts

Here's something interesting I found while researching the topic, from the creator of the [http://www.textuality.com/bonnie/ bonnie] benchmark:  
 
<blockquote>
 
"...note that the Bonnie results when your memory’s bigger than your test data are generally bogus, since well-designed Unix-lineage systems... try hard to buffer everything to avoid doing I/O. The only way to defeat this and actually test I/O rates is to completely flood the available buffer space. This is the right thing to do, because in many production applications, memory is maxed out anyhow, so the actual I/O rate (what Bonnie measures) becomes an important performance-limiting factor." [http://www.tbray.org/ongoing/When/200x/2004/11/16/Bonnie64]
 
</blockquote>
 
...and again, from the website:
<blockquote>
"It is important to use a file size that is several times the size of the available memory (RAM) - otherwise, the operating system will cache large parts of the file, and Bonnie will end up doing very little I/O. <b>At least four times</b> the size of the available memory is desirable." (emphasis added)[http://www.textuality.com/bonnie/advice.html]
</blockquote>
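Following that advice, a quick hypothetical invocation (assuming the classic bonnie's -d scratch-directory and -s size-in-MB options, and that /mnt/test is the filesystem under test): on a machine with 1GB of RAM, the test file should be at least 4GB:

<pre>
# 1GB of RAM, so use a 4GB (4096MB) test file, per the "four times" advice
bonnie -d /mnt/test -s 4096
</pre>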
  
 
In other words, in order to get any meaningful results out of a hard drive performance test, your script has to produce <b>more hard drive I/O than the computer has RAM.</b> If I understand this correctly, this means that (for example) if your computer has 1GB of RAM, then the script has to read/write at least 1GB before it starts to actually stress the hard drive. Otherwise, the data gets stored in your RAM and never touches the hard drive, and you have effectively benchmarked the read/write speed of your RAM. :) Note that this can come in the form of lots of little files totaling more than 1GB, or one giant file bigger than 1GB. In fact, it is probably best to do both. --[[User:Evets|scarter4]] 14:42, 22 January 2009 (UTC)
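A minimal sketch of that sizing rule in a benchmark script (assumptions: Linux with /proc/meminfo, dd available, and the hypothetical scratch path /mnt/test):

<pre>
#!/bin/bash
# Read total RAM in kB from /proc/meminfo and convert to MB
ram_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
ram_mb=$((ram_kb / 1024))

# Size the test file at four times RAM, per the bonnie advice above
test_mb=$((ram_mb * 4))

# Write the test file in 1MB blocks; timing this gives a crude write benchmark
dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=$test_mb
</pre>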
 
As far as writes go, you can use fsync() to flush the buffers to disk -- the 'sync' command will do this. For reads, you'll need to clear or exceed the cache in order for this to work. There are a couple of ways of doing this: one is to fill the available RAM so that little or none is available for buffering; another is to manipulate the cache controls in /proc.
--[[User:Chris Tyler|Chris Tyler]] 15:16, 22 January 2009 (UTC)
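A short sketch of both steps (hedged: /proc/sys/vm/drop_caches is the usual cache-dropping knob on Linux 2.6.16 and later, and writing to it requires root):

<pre>
# Flush dirty buffers to disk so pending writes are counted
sync

# Drop the page cache (and dentries/inodes) so the next reads hit the disk
echo 3 > /proc/sys/vm/drop_caches
</pre>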
