Hard drive speed test
Hi,
Could you recommend a good speed test for the hard drive? I noticed some abysmal transfer rates and I want to know if anyone else is having these issues.
Thanks
Re: Hard drive speed test
One way is to write pure data to the drive; this measures raw write speed from /dev/zero straight to disk.
Code:
root@b3-media:/home/johannes# dd if=/dev/zero of=/home/storage/test bs=8k count=100000
100000+0 records in
100000+0 records out
819200000 bytes (819 MB) copied, 11.1926 s, 73.2 MB/s
Note that the bottleneck in this system is not the HDD itself but the CPU and memory bus (the above example gives 100% CPU load). If you want to include the network as a bottleneck, you can try a pure network copy from FTP or HTTP to disk, for instance with wget from a source on your local network that is faster than the B3. This usually maxes out at around 30-60 MB/s depending on the protocol (FTP, HTTP, SAMBA, AFP, etc.) and the transfer direction (reading from disk is faster than writing).
If you do network transfers over an encrypted protocol (SSH, SCP), you take a heavy CPU penalty since the CPU in the B3 does not handle these calculations well. You will reach approximately 10-15 MB/s.
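For the network-copy variant, something like this works - the host address and file name here are just placeholders for a machine on your own LAN that is faster than the B3:
Code:
root@b3-media:/home/johannes# wget -O /home/storage/nettest http://192.168.1.10/bigfile.bin
wget prints the average transfer rate when the download finishes.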
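And for the encrypted case, a plain scp from a client to the B3 lets you see the penalty - again, the client prompt and file name are made up:
Code:
user@client:~$ scp /tmp/bigfile.bin root@b3-media:/home/storage/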
/Johannes (Excito co-founder a long time ago, but now I'm just Johannes)
Re: Hard drive speed test
This got me a bit interested due to my recent disk-fiddlery (probably more disks than Excito ever imagined attached to the B3, conversion to EXT4, etc).
So I replicated Johannes's test:
Code:
root@b3:/home/niklas# dd if=/dev/zero of=/home/storage/test bs=8k count=100000
100000+0 records in
100000+0 records out
819200000 bytes (819 MB) copied, 72.5146 s, 11.3 MB/s
Appalling! Then I realized that I was watching 720p video streamed from the same B3 during the test (hrm, sorry).
Second result (no video watching this time) was much better:
Code:
root@b3:/home/niklas# dd if=/dev/zero of=/home/storage/test bs=8k count=100000
100000+0 records in
100000+0 records out
819200000 bytes (819 MB) copied, 9.22257 s, 88.8 MB/s
But why would Johannes use chunks of 8k at a time on a file system with a 4k block size?
Do such things even matter?
Code:
root@b3:/home/niklas# dd if=/dev/zero of=/home/storage/test bs=4k count=200000
200000+0 records in
200000+0 records out
819200000 bytes (819 MB) copied, 9.35668 s, 87.6 MB/s
Apparently not much.
Besides, none of these tests are against a physical volume, but a logical one - so they might be completely irrelevant.
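If you want to take the logical volume out of the equation, you can read straight from the raw disk instead (assuming the drive shows up as /dev/sda - check with lsblk or fdisk -l first). Note that this only measures read speed, since writing to the raw device would trash the volume:
Code:
root@b3:/home/niklas# dd if=/dev/sda of=/dev/null bs=1M count=1000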

My personal observations on network transfer speed:
I find NFS to be by far the quickest for copying large files over the local network.
Applications that might just stop in the middle of an operation, do seeks all the time, and generally misbehave (like my media player) fare a lot better using SMB.
When using ssh (or SFTP), I have noticed that for some mysterious reason a single transfer will often settle at some speed and then stay there - even if there are plenty more bandwidth and CPU resources (even on the B3) to go around.
If you then start a second SFTP transfer (from the same host), it reaches the same speed as the first one - so you have doubled the transfer rate by dividing it into two transfers! You can keep doing this a couple of times until you hit CPU/Memory limitations.
But why? Is there something in the protocol itself that tries to govern this?
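For reference, the parallel trick is just something like this (hostname and file names are made up, and it assumes key-based login so the background scp processes don't stop to ask for a password):
Code:
user@client:~$ scp root@b3:/home/storage/bigfile1 /tmp/ &
user@client:~$ scp root@b3:/home/storage/bigfile2 /tmp/ &
user@client:~$ wait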