dave_ at July 4th, 2006 08:12 — #1
I've got a system where I need to open a large number of files and gradually build them up. These files are pretty large, so I need to keep them on disk.
Can anyone recommend the best way to access these files?
At the moment I open each file and keep it open. Each time I need to write a block, I seek to the right place and write there.
It's thrashing the HDD a bit, so I'm wondering what the best way of writing to the hard drive is.
Edit: What I'm looking for are just rules of thumb for good file access patterns.
reedbeta at July 4th, 2006 08:48 — #2
Try keeping a cache of writes for each file, and when the cache grows large enough, commit all the data to disk at once.
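Something like this, maybe (a minimal sketch in C; the names `WriteCache`, `cache_write`, etc. are just illustrative, and it assumes individual writes are no bigger than the cache):

```c
/* Minimal per-file write cache: buffer small writes in RAM and commit
   them to disk in one seek + one large write when the buffer fills. */
#include <stdio.h>
#include <string.h>

#define CACHE_SIZE (64 * 1024)  /* starting point; tune for your platform */

typedef struct {
    FILE  *fp;
    long   offset;              /* file offset of the first cached byte */
    size_t used;                /* bytes currently buffered */
    char   buf[CACHE_SIZE];
} WriteCache;

/* Flush buffered bytes to disk with a single seek + write. */
static void cache_flush(WriteCache *c)
{
    if (c->used == 0) return;
    fseek(c->fp, c->offset, SEEK_SET);
    fwrite(c->buf, 1, c->used, c->fp);
    c->offset += (long)c->used;
    c->used = 0;
}

/* Reposition the cache; buffered data has to go out first. */
static void cache_seek(WriteCache *c, long offset)
{
    cache_flush(c);
    c->offset = offset;
}

/* Append to the cache; hit the disk only when the buffer fills.
   Assumes len <= CACHE_SIZE. */
static void cache_write(WriteCache *c, const void *data, size_t len)
{
    if (c->used + len > CACHE_SIZE)
        cache_flush(c);
    memcpy(c->buf + c->used, data, len);
    c->used += len;
}
```

Note this only batches *contiguous* runs of writes; every seek to a new position forces a flush, so the win depends on how sequential your access pattern is within each file.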
Another thing to look into would be memory-mapped I/O, which lets the OS make the decisions about when to do the disk writes. It could be more efficient if the OS is smart or has a smart driver.
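On a POSIX-ish system that would look roughly like this (whether your embedded platform supports `mmap` at all is a big assumption; `map_file_rw` is just an illustrative helper):

```c
/* Memory-mapped I/O sketch: map the file into memory, write to it like
   an array, and let the OS schedule the actual disk writes. */
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Map a file of a known size for read/write; returns NULL on failure. */
static void *map_file_rw(const char *path, size_t size)
{
    int fd = open(path, O_RDWR | O_CREAT, 0644);
    if (fd < 0) return NULL;
    if (ftruncate(fd, (off_t)size) != 0) { close(fd); return NULL; }
    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);                  /* the mapping keeps the file alive */
    return p == MAP_FAILED ? NULL : p;
}
```

Then a "write" is just a `memcpy` into the mapped region at the right offset, the OS writes dirty pages back on its own schedule, `msync` forces it, and `munmap` tears the mapping down.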
Also, what kind of platform are you running this on? If you have any control over the hardware, you should look into using a hard drive interface that supports Native Command Queuing, such as SATA. This lets the disk itself take advantage of knowledge about the physical location of files to schedule writes and reads so that they execute as fast as possible.
dave_ at July 4th, 2006 09:01 — #3
It's an embedded platform (a special PVR), so I don't have much to work with (including documentation).
I'm thinking that the only way I'm going to improve things is to cache the writes.
Of course, I don't have much memory, so how do I figure out the best size for my cache? Is there some kind of disk block size? I've seen 64K in a few articles.
reedbeta at July 4th, 2006 21:46 — #4
I think the only way to find the optimum cache size will be to set up a test scaffold and experiment. 64K sounds like a good starting point; try going up and down by powers of 2 from there.
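The scaffold can be pretty small. This sketch leans on stdio's `setvbuf` to stand in for the write cache so you only have to vary one number (on the real PVR you'd substitute your own cache and a better timer than `clock()`, which is coarse):

```c
/* Test scaffold sketch: time the same write workload at a given cache
   size so different sizes can be compared. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Write `blocks` blocks of `block` bytes through a stdio buffer of
   `cache` bytes; returns elapsed seconds including the final flush. */
static double time_run(const char *path, size_t cache, size_t block, int blocks)
{
    FILE *fp = fopen(path, "wb");
    char *cbuf = malloc(cache);
    char *data = calloc(1, block);
    setvbuf(fp, cbuf, _IOFBF, cache);   /* stdio does the caching */

    clock_t t0 = clock();
    for (int i = 0; i < blocks; i++)
        fwrite(data, 1, block, fp);
    fclose(fp);                         /* flushes the buffer */
    clock_t t1 = clock();

    free(cbuf);
    free(data);
    return (double)(t1 - t0) / CLOCKS_PER_SEC;
}
```

Then call `time_run` in a loop over, say, 16K, 32K, 64K, 128K, 256K and print the timings; the sweet spot should stand out, though the result will only be trustworthy if the workload resembles your real access pattern.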