Your “benchmarks” were affected by the HDD's built-in cache, which reduced the apparent latency of I/O requests by absorbing them into its internal buffer.
Also keep in mind that such tests depend heavily on the filesystem used and on additional hardware. Current SSDs have a rather large ACTUAL sector size, in many cases larger than that of HDDs. So if you run such tests without caching, they will naturally look quite slow, because an uncached write (on FAT, for example) requires at least three write operations: one to the cluster allocation table, one to the directory file, and one to the file data itself. With misaligned access the number of physical pages actually programmed is even higher, which makes random-write patterns look even worse (worse than on an HDD); a rough sketch of this arithmetic follows below. An SSD with some amount of on-board DRAM cache (like the Intel X25-E) may show better results, though.
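For illustration, here is a minimal sketch of that page-count arithmetic in Python. The 8 KiB page size and the on-disk offsets are made-up assumptions for the example, not values taken from any particular drive or filesystem:

    # Rough sketch: count the NAND pages touched by one small uncached FAT write.
    # The page size and offsets below are illustrative assumptions only.

    def pages_touched(offset: int, length: int, page_size: int) -> int:
        """Number of physical pages a write of `length` bytes at `offset` spans."""
        first_page = offset // page_size
        last_page = (offset + length - 1) // page_size
        return last_page - first_page + 1

    PAGE = 8192  # assumed actual SSD page size (vs. a 512-byte HDD sector)

    # An uncached FAT write hits at least three on-disk structures:
    fat_io  = pages_touched(offset=0x4200,  length=4,    page_size=PAGE)  # allocation table entry
    dir_io  = pages_touched(offset=0x9E00,  length=32,   page_size=PAGE)  # directory entry
    data_io = pages_touched(offset=0x13FF0, length=4096, page_size=PAGE)  # misaligned file data

    print(fat_io + dir_io + data_io)  # 4 pages: misalignment splits the data across 2

So a single 4 KiB user write can end up programming four physical pages, and each extra page costs a full NAND program cycle.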
Maybe the whole situation will improve with future SandForce controllers, since it is possible to dedicate different channels to different, smaller requests (instead of striping each I/O across the channels in parallel), at the cost of extra space, more advanced and complex controller logic and (most likely) some DRAM cache. But at present this is a weak side of mainstream SandForce SSDs. And even with a better controller (better for random access patterns), the theoretical throughput of an SSD with this kind of NAND would drop below 30 MB/s on small files, because there is no benefit from multiple channels (like the 8 channels of SandForce SSDs) and more data is actually written than requested.
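To make that throughput claim concrete, here is a back-of-the-envelope estimate in Python. The page size and program time are assumed MLC-class figures, not datasheet values for any specific NAND:

    # Back-of-the-envelope small-file write throughput on a multi-channel SSD.
    # PAGE_SIZE and T_PROG are assumptions for illustration, not datasheet values.

    PAGE_SIZE = 8192    # bytes per NAND page (assumed)
    T_PROG    = 250e-6  # seconds per page program (assumed MLC-class figure)
    CHANNELS  = 8       # e.g. SF-1200 class controllers

    # Large sequential writes stripe full pages across every channel:
    print(CHANNELS * PAGE_SIZE / T_PROG / 1e6)       # ~262 MB/s

    # A 4 KiB file fits inside one page on one channel,
    # so the other seven channels sit idle for that request:
    print(4096 / T_PROG / 1e6)                       # ~16 MB/s

    # Metadata pages (see the FAT sketch above) multiply the pages programmed,
    # dragging effective user throughput down even further:
    pages_programmed = 4
    print(4096 / (pages_programmed * T_PROG) / 1e6)  # ~4 MB/s

Under these assumptions the small-file figure lands well below 30 MB/s even before metadata overhead is counted, which matches the point above.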
As a final word, the “benchmarks” you used are more typical of enterprise data access patterns, so ordinary HDDs, as a more “universal” solution, naturally perform better on them than mainstream SSDs (though not better than the “enterprise” ones).