
Hard drive speed test

Posted: 02 Aug 2013, 09:57
by vladi
Hi,

Could you recommend a good speed test for the hard drive? I noticed some abysmal transfer rates and I want to know if anyone else is having these issues.

Thanks

Re: Hard drive speed test

Posted: 02 Aug 2013, 12:58
by johannes
One way is to write pure data to the drive; this measures the raw write speed from nothing (/dev/zero) to disk:

Code:

root@b3-media:/home/johannes# dd if=/dev/zero of=/home/storage/test bs=8k count=100000
100000+0 records in
100000+0 records out
819200000 bytes (819 MB) copied, 11.1926 s, 73.2 MB/s
Note that the bottleneck in this system is not the HDD itself, but the CPU and memory bus (the example above gives 100% CPU load). If you want to include the network as a bottleneck, you can try a pure network copy from FTP or HTTP to disk, for instance a wget from a source on your local network that is faster than the B3. This usually maxes out at around 30-60 MBytes/s depending on the protocol (FTP, HTTP, Samba, AFP, etc.) and the transfer direction (reading from disk is faster than writing).
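A rough sketch of such a network-to-disk test (the URL and file names below are placeholders; point it at any large file served by a faster machine on your LAN):

Code:

# Writing the output under /home/storage includes the HDD in the measurement;
# use -O /dev/null instead if you only want to measure the network path.
wget -O /home/storage/nettest http://192.168.1.10/bigfile.bin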

If you do network transfers over an encrypted protocol (SSH, SCP), there is a heavy CPU penalty, since the CPU in the B3 does not handle these calculations well. You will reach approximately 10-15 MBytes/s.
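A quick way to see this for yourself (sketch only; the user name, host name and file are placeholders) is to copy a large file to the B3 over scp and note the rate it reports:

Code:

# The B3's CPU has to decrypt the incoming stream, so this rate is
# dominated by the cipher, not the disk.
scp bigfile.bin user@b3:/home/storage/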

Re: Hard drive speed test

Posted: 02 Aug 2013, 15:35
by Cheeseboy
This got me a bit interested due to my recent disk-fiddlery (probably more disks attached to the B3 than Excito ever imagined, conversion to EXT4, etc.).
So I replicated Johannes' test:

Code:

root@b3:/home/niklas# dd if=/dev/zero of=/home/storage/test bs=8k count=100000
100000+0 records in
100000+0 records out
819200000 bytes (819 MB) copied, 72.5146 s, 11.3 MB/s
Appalling! Then I realized that I was watching 720P video streamed from the same B3 during the test (hrm, sorry).
Second result (no video watching this time) was much better:

Code:

root@b3:/home/niklas# dd if=/dev/zero of=/home/storage/test bs=8k count=100000
100000+0 records in
100000+0 records out
819200000 bytes (819 MB) copied, 9.22257 s, 88.8 MB/s
But why would Johannes use chunks of 8k at a time on a file system with a 4k block size?
Does such a thing even matter?

Code:

root@b3:/home/niklas# dd if=/dev/zero of=/home/storage/test bs=4k count=200000
200000+0 records in
200000+0 records out
819200000 bytes (819 MB) copied, 9.35668 s, 87.6 MB/s
Apparently not much.
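For reference, a sketch of how to check the file system block size (/dev/sda1 is just a placeholder; substitute whatever partition or LVM volume /home/storage actually lives on):

Code:

# Prints the ext4 block size, e.g. "Block size: 4096".
tune2fs -l /dev/sda1 | grep 'Block size'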
Besides, none of these tests run against a physical volume, only a logical one - so they might be completely irrelevant :-)
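If you want to take the file system and LVM out of the picture, a read-only sketch against the raw device looks like this (/dev/sda is an assumption - double-check the device name, e.g. with fdisk -l, before running anything):

Code:

# Sequential read straight from the raw disk, bypassing LVM and the file system.
dd if=/dev/sda of=/dev/null bs=8k count=100000
# hdparm's built-in read benchmark gives a comparable figure.
hdparm -t /dev/sda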

My personal observations on network transfer speed:
I find NFS to be by far the quickest for copying large files over the local network.
Applications that might just stop in the middle of an operation, do seeks all the time, and generally misbehave (like my media player) fare a lot better using SMB.
When using ssh (or SFTP), I have noticed that for some mysterious reason a single transfer will often settle at some speed and then stay there - even if there are plenty of bandwidth and CPU resources (even on the B3) to go around.
If you then start a second SFTP transfer (from the same host), it reaches the same speed as the first one - so you have doubled the transfer rate by splitting it into two transfers! You can keep doing this a couple of times until you hit CPU/memory limitations.
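To reproduce the effect (sketch only; the file, user and host names are placeholders):

Code:

# Start two transfers of the same large file in parallel; each tends to
# settle at roughly the rate a single transfer would get on its own.
scp bigfile.bin user@b3:/home/storage/copy1 &
scp bigfile.bin user@b3:/home/storage/copy2 &
wait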
But why? Is there something in the protocol itself that tries to govern this?