
Reading Efficiency

Started by FrostWraith, October 01, 2007, 10:01 PM


FrostWraith

I was just curious, for all the programming languages around the board: if a data source is to be read sequentially, what sized chunks should be used for certain ranges of data sizes?  For example:
1 byte - 1024 bytes: fgets(res, 255)
1025 bytes - 1048576 bytes: fgets(res, 2550)
etc...

I know that the larger the chunks you read at a time, the faster it is, but what size seems appropriate if you don't want to bog your processor down with too many read requests?
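For reference, here is a minimal C sketch of the kind of chunked read loop being asked about. It uses fread rather than fgets (fgets stops at newlines, so it suits text more than raw bytes); the 4096-byte buffer, the data.bin filename, and the process() call are illustrative assumptions, not values from the thread.

/* Minimal sketch: read a file in fixed-size chunks.
   CHUNK_SIZE and "data.bin" are illustrative choices only. */
#include <stdio.h>

#define CHUNK_SIZE 4096

int main(void)
{
    unsigned char buf[CHUNK_SIZE];
    size_t n;
    FILE *fp = fopen("data.bin", "rb");   /* hypothetical input file */

    if (fp == NULL)
        return 1;

    /* fread returns how many bytes it actually read; a short or zero
       count means end-of-file (or an error), which ends the loop. */
    while ((n = fread(buf, 1, sizeof buf, fp)) > 0) {
        /* process(buf, n);  -- hypothetical per-chunk work */
    }

    fclose(fp);
    return 0;
}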

Yegg

I'd imagine there wouldn't be much of a difference in speed between reading a file 1024 bytes at a time and 1024 * 2 bytes at a time, and so on, unless the file were incredibly large, in which case you may notice lag regardless of how much you read per request. I don't think it's something to worry about; I usually go with 1024 bytes.

Banana fanna fo fanna

I believe there is an optimal buffer size which varies per OS (and per OS settings?), but this is not my area of expertise. If I were in this situation I'd do a quick Google, and if it didn't find anything, write a quick test application that benchmarks various chunk sizes against each other.
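A rough sketch of the kind of benchmark being suggested: re-read the same file with several chunk sizes and time each pass. The filename, the list of sizes, and the use of clock() are assumptions for illustration; note that the OS file cache will make every pass after the first look faster, so results should be taken with a grain of salt (or the cache flushed between runs).

/* Sketch: time full passes over a test file with various chunk sizes.
   "testfile.bin" and the size list are hypothetical. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    const size_t sizes[] = { 512, 1024, 4096, 16384, 65536 };
    const char *path = "testfile.bin";   /* hypothetical large test file */
    size_t i;

    for (i = 0; i < sizeof sizes / sizeof sizes[0]; i++) {
        FILE *fp = fopen(path, "rb");
        unsigned char *buf = malloc(sizes[i]);
        clock_t start, end;

        if (fp == NULL || buf == NULL)
            return 1;

        start = clock();
        while (fread(buf, 1, sizes[i], fp) > 0)
            ;   /* read-only pass; no per-chunk work */
        end = clock();

        printf("%6lu-byte chunks: %.3f s\n",
               (unsigned long)sizes[i],
               (double)(end - start) / CLOCKS_PER_SEC);

        free(buf);
        fclose(fp);
    }
    return 0;
}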

Yegg

Quote from: Banana fanna fo fanna on October 01, 2007, 10:20 PM
I believe there is an optimal buffer size which varies per OS (and per OS settings?), but this is not my area of expertise. If I were in this situation I'd do a quick Google, and if it didn't find anything, write a quick test application that benchmarks various chunk sizes against each other.

I was going to suggest a benchmark too. However, he'd need a very large test file, and the difference in speed probably wouldn't be worth it even for a file 1 GB in size. I could be wrong, but I think that's pretty accurate.

squiggly

4K is the standard cluster size on NTFS; best to go with that.
- I can use a language translator, I should be cool now!
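On the 4K point: rather than hard-coding NTFS's default, the actual cluster size can be queried at runtime on Windows with GetDiskFreeSpace. A minimal sketch, assuming the C: drive and skipping most error handling:

/* Sketch: query the filesystem cluster size instead of assuming 4K.
   The "C:\\" root path is an assumption. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD sectorsPerCluster, bytesPerSector, freeClusters, totalClusters;

    if (GetDiskFreeSpaceA("C:\\", &sectorsPerCluster, &bytesPerSector,
                          &freeClusters, &totalClusters)) {
        printf("Cluster size: %lu bytes\n",
               (unsigned long)(sectorsPerCluster * bytesPerSector));
    }
    return 0;
}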