Winter 2010 Presentations/Storage Performance

Title

Storage Performance By: David Chisholm (dmchisho@learn.senecac.on.ca)

Introduction

In order to have our Koji Build Farm run as efficiently as possible, we needed to determine which form of data storage would be the fastest overall. The candidates were:

  • PATA Hard Drive connected via USB
  • NFS share from HongKong
  • iSCSI network connection to HongKong
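
Each candidate has to be visible as a local directory before Bonnie++ (see the Approach section) can test it. The sketch below shows one way the three mounts might be set up; the device names, export path, target name, and mount points are assumptions for illustration only.

# USB-connected PATA drive (assuming it appears as /dev/sdb1)
mount /dev/sdb1 /mnt/usb

# NFS share exported by HongKong (export path is an assumption)
mount -t nfs hongkong:/export/koji /mnt/nfs

# iSCSI LUN exported by HongKong (target IQN is an assumption); after login
# the LUN appears as a local block device, e.g. /dev/sdc
iscsiadm -m discovery -t sendtargets -p hongkong
iscsiadm -m node -T iqn.2010-04.hongkong:koji -p hongkong --login
mount /dev/sdc /mnt/iscsi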

There are three main performance statistics that we are concerned with when rating storage performance.

1. Read: The amount of data that can be read from the storage medium per second.
2. Write: The amount of data that can be written to the storage medium per second.
3. Access: The time needed to locate a given piece of data on the storage medium and begin retrieving it, reported in the results below as accesses per second.

Another factor is cost. Since NFS and iSCSI are both network storage solutions, they have no cost in themselves; their price is simply the cost of the drives installed in the remote storage server they rely on. A USB-connected PATA or SATA drive requires both a hard drive and a PATA/SATA-to-USB interface, such as an external drive enclosure.

Approach

We benchmarked using a Linux utility called Bonnie++, written by Russell Coker.

The benchmark was run three times on each medium, and the results were then averaged together.

The command used is as follows:

bonnie++ -d <location> -s 2048 -u root
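
Here -d selects the directory to test, -s 2048 sets a 2048 MB test file size, and -u root runs the benchmark as the root user. Below is a minimal sketch of how the three runs per medium could be scripted, assuming the mount points from the Introduction and made-up log file names:

# run the same benchmark three times per medium and keep each report for averaging
for dir in /mnt/usb /mnt/nfs /mnt/iscsi; do
    for run in 1 2 3; do
        bonnie++ -d "$dir" -s 2048 -u root > bonnie-$(basename $dir)-$run.txt
    done
done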

Process

The process was simple: find a storage solution that would result in the best build times while making the most efficient use of the storage resources available to us.

The main issue encountered was finding a repeatable benchmarking solution that would give the desired results while being able to test all 3 of our storage mediums. Common Linux tools such as the dd command are capable of doing disk benchmarking, but they only work on real block devices and not on network file systems.
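
For example, a typical dd test reads a raw block device directly, which works for the USB drive but has no equivalent for an NFS mount (the device name below is an assumption):

# sequential read of the first 2 GB of the USB drive, bypassing the page cache
dd if=/dev/sdb of=/dev/null bs=1M count=2048 iflag=direct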

The solution was Bonnie++, a Linux command-line utility which gives an extensive amount of storage performance information while also having the ability to test all of our storage systems.

Discovery

We discovered that finding a viable benchmarking solution is harder than it sounds. Raw numbers will not always correspond to real-world results, since performance ultimately comes down to the application using those resources.

iSCSI seems to work, but only to a point.

We can log in from an initiator; however, under heavy load the target receives invalid opcodes, causing the connection to fail. Experimenting with a /proc/cpu/alignment value of 3 (fixup+warn) did not clear the issue. Using the exact same target with an F12 x86_64 initiator is successful.
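
The alignment experiment refers to the kernel's /proc/cpu/alignment control, where writing 3 selects fixup+warn as noted above. A sketch of that experiment (not a confirmed transcript):

echo 3 > /proc/cpu/alignment    # 3 = fixup+warn
cat /proc/cpu/alignment         # confirm the mode before re-running the benchmark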

Results

Write

        Transfer Speed   Increase vs. PATA   CPU Usage   Increase vs. PATA
PATA    28,790 KB/s      0%                  24%         0%
NFS     43,363 KB/s      50%                 16%         -50%
iSCSI   31,503 KB/s      9%                  30%         25%

Read

        Transfer Speed   Increase vs. PATA   CPU Usage   Increase vs. PATA
PATA    25,991 KB/s      0%                  10%         0%
NFS     51,789 KB/s      99%                 85%         850%
iSCSI   59,147 KB/s      127%                84%         840%

Access

        Accesses per Second   Increase vs. PATA   CPU Usage   Increase vs. PATA
PATA    121                   0%                  0%          0%
NFS     1201                  1000%               35%         350%
iSCSI   2514                  2077%               44%         44%

Links