overwrite=0 results in overwrites on Windows
Hello, when using the following job spec:

```
fio --thread --name=1 --time_based --overwrite=0 --runtime=120 --unlink=1 --sync=sync --ioengine=windowsaio --direct=1 --rw=write --directory=d:\ --size=10000g --nrfiles=16 --numjobs=1 --iodepth=16
```
writes still result in overwrites on Windows. ReFS allocates in 64K units. How do I keep fio from doing overwrites?
I think overwrite=0 means that fio will just do what it would otherwise do, whereas overwrite=1 means that fio will make sure to lay out the file before starting the job.
overwrite=bool
If true, writes to a file will always overwrite existing data. If the file doesn’t already exist, it will be created before the write phase begins. If the file exists and is large enough for the specified write phase, nothing will be done. Default: false.
Perhaps one of the fallocate options will accomplish what you are seeking.
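One hedged way to try this against the original workload is a job file with fallocate=none, which tells fio not to pre-allocate blocks before the write phase. This is only a sketch: whether fallocate=none actually avoids the overwrite/backfill behavior with windowsaio on ReFS would need verifying.

```ini
; Variant of the original job with pre-allocation disabled (assumption:
; fallocate=none behaves as intended with the windowsaio engine).
[global]
thread
time_based
runtime=120
ioengine=windowsaio
direct=1
sync=sync
rw=write
overwrite=0
fallocate=none

[job1]
directory=d\:\    ; ':' inside a job-file path must be escaped with '\'
size=10000g
nrfiles=16
numjobs=1
iodepth=16
unlink=1
```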
Hello @vincentkfu, this explains fallocate on Windows: https://github.com/axboe/fio/issues/833
IMHO, overwrite=0 should result in no overwrites at all; otherwise we are always overwriting, which is entirely different from first allocations.
"Note that using truncate on Windows will interact surprisingly with non-sequential write patterns. When writing to a file that has been extended by setting the end-of-file information, Windows will backfill the unwritten portion of the file up to that offset with zeroes before issuing the new write. This means that a single small write to the end of an extended file will stall until the entire file has been filled with zeroes."
If all we can do on Windows is overwrites, then the perf numbers for Windows do not represent what the Windows storage stack can do. This is especially true with ReFS, since really testing the storage subsystem with ReFS means being able to create and write many (tens of thousands of) new files across many (hundreds of) new and old directories.
The create_on_open option takes care of creating the file on open.
This is one of the reasons I was surprised that we can't have a job write to multiple directories; the lack of this functionality is a serious shortcoming. Although there are some caveats, ReFS allocates resources on a per-directory basis, so by writing to a single directory we are seriously limiting our perf tests.
I believe that with a job like Job1 --directory=DirA:..:DirZ --nrfiles=1000000 --openfiles=500 --iodepth=1024* --file_service_type=random --create_on_open=1 --filesize=4K-1m:
*For the most part, 1024 is the miniport's max outstanding commands.
fio should write (until full) to 500 random files (creating files as necessary to maintain 500 open), of random sizes between 4K and 1M, in random directories among DirA..DirZ, all while maintaining a queue depth of 1024.
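As a job file, that idea might look like the sketch below. The directory names here are hypothetical stand-ins for DirA..DirZ; fio's directory option accepts a ':'-separated list and distributes files among the entries, though whether the resulting behavior matches the description above on Windows would need testing.

```ini
; Hypothetical multi-directory write job (assumes the listed
; directories already exist).
[multi-dir-write]
directory=DirA:DirB:DirC   ; ':'-separated list; fio spreads files across these
nrfiles=1000000
openfiles=500
file_service_type=random
create_on_open=1
filesize=4k-1m
rw=write
iodepth=1024
ioengine=windowsaio
```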
Ideally, to truly test a filesystem we should be able to tell fio to create ndirs directories with a given dirdepth, nrleaves per directory, and nrfilesperdir, with filesize given as a range. And if we wanted to go a step further and test a filesystem as it is used in the real world, we should also be able to specify a mix of create_on_open, overwrites, and appends.
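Since fio has no such directory-tree options today, one workaround is to generate the tree externally and hand the leaf directories to fio as a ':'-separated list. A minimal sketch (the counts, names, and the commented fio invocation at the end are all hypothetical):

```shell
#!/bin/sh
# Build a two-level directory tree and collect the leaf directories
# into a ':'-separated list suitable for fio's --directory option.
NDIRS=4        # top-level directories (hypothetical)
NRLEAVES=3     # leaf directories per top-level dir (hypothetical)
ROOT=fio_tree

DIRLIST=""
i=1
while [ "$i" -le "$NDIRS" ]; do
  j=1
  while [ "$j" -le "$NRLEAVES" ]; do
    d="$ROOT/dir$i/leaf$j"
    mkdir -p "$d"
    DIRLIST="${DIRLIST:+$DIRLIST:}$d"   # append with ':' separator
    j=$((j + 1))
  done
  i=$((i + 1))
done

echo "$DIRLIST"
# Illustrative invocation (not run here):
# fio --name=treewrite --directory="$DIRLIST" --nrfiles=100 \
#     --create_on_open=1 --filesize=4k-1m --rw=write
```

This only approximates nrfilesperdir, since fio distributes nrfiles across the whole list rather than guaranteeing a per-directory count.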