GUEST.it
So, if the writer wants to be sure that data is really written to disk, it can't, because even when using dsync the data is not always written to disk in...
Ok, but regardless of the writing pattern, if data is written and synced before the write operation is confirmed, disk speed will always be the bottleneck. You can't reach 180MB/s if *any*...
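Just to make that point concrete, here's a minimal standalone sketch (plain POSIX C, nothing LizardFS-specific, hypothetical filename): with `O_DSYNC` each `write()` only returns once the block is on stable storage, so the caller can never go faster than the disk can sync, no matter how fast the network is.

```c
/* Minimal sketch: a write that only lands in the page cache vs. one that is
 * synced before it is confirmed.  Not LizardFS code, just the semantics. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv) {
    int synced = (argc > 1);                 /* pass any argument => synced mode */
    int flags = O_WRONLY | O_CREAT | O_TRUNC | (synced ? O_DSYNC : 0);
    int fd = open("probe.bin", flags, 0644);
    if (fd < 0) { perror("open"); return 1; }

    size_t block = 1 << 20;                  /* 1 MiB, like dd bs=1M */
    char *buf = calloc(1, block);
    if (!buf) return 1;

    for (int i = 0; i < 100; i++) {
        if (write(fd, buf, block) != (ssize_t)block) { perror("write"); return 1; }
        /* buffered mode: data may still only be in RAM when write() returns;
         * O_DSYNC mode: it is already on disk, which is what "confirmed" should mean */
    }
    close(fd);
    free(buf);
    return 0;
}
```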
Today or tomorrow I'll try to strace the chunkserver process during a "dd". And what about the huge performance drop seen inside a VM booted from lizardfs? I can understand...
Do you need a full strace dump or just some specific calls (if yes, which calls?)
This is a strace on the chunkserver process while writing a 1GB file with `dd if=/dev/zero of=prova bs=1M count=1000 oflag=direct`:
```
[pid 12726] open("/export/sda/chunk/chunks00/chunk_00000000000015A2_00000001.mfs", O_RDWR|O_CREAT|O_TRUNC, 0666) = 11
[pid 12725] open("/export/sda/chunk/chunks00/chunk_00000000000015A3_00000001.mfs", O_RDWR|O_CREAT|O_TRUNC,...
```
`fsync` is called *after* the chunk is written. That's not the same as honoring what the client is asking for, for example a synchronous write, and could lead to some (bad) race condition....
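To spell out the difference I mean, here's a rough sketch (made-up helper names, not the real chunkserver code) of the two orderings: what the strace above suggests (write the whole chunk, then a single fsync) versus what honoring a synchronous client write would look like (sync before every ack):

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* What the strace suggests: all blocks are written, then one fsync at the end.
 * Until that final fsync, acked data may still live only in the page cache. */
static int write_chunk_then_sync(int fd, const char *buf, size_t blk, int nblocks) {
    for (int i = 0; i < nblocks; i++)
        if (pwrite(fd, buf, blk, (off_t)i * blk) != (ssize_t)blk) return -1;
    return fsync(fd);                       /* single sync once the chunk is done */
}

/* What honoring a synchronous client write would look like: every block is
 * flushed before it is acknowledged, so a confirmed write is really on disk. */
static int write_chunk_sync_each(int fd, const char *buf, size_t blk, int nblocks) {
    for (int i = 0; i < nblocks; i++) {
        if (pwrite(fd, buf, blk, (off_t)i * blk) != (ssize_t)blk) return -1;
        if (fsync(fd) != 0) return -1;      /* sync before acking this block */
    }
    return 0;
}

int main(void) {
    char buf[65536];
    memset(buf, 0, sizeof buf);
    int fd = open("chunk_demo.mfs", O_WRONLY | O_CREAT | O_TRUNC, 0666);
    if (fd < 0) { perror("open"); return 1; }
    write_chunk_then_sync(fd, buf, sizeof buf, 16);
    write_chunk_sync_each(fd, buf, sizeof buf, 16);
    close(fd);
    return 0;
}
```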
> With PERFORM_FSYNC on, it goes as follows: client sends data -> chunkserver receives data -> chunkserver writes data to disk -> chunkserver syncs data so it's flushed from cache...
> But in this way it would be impossible for a client to write data in asynchronous mode, if they need to.
> For example, some not-so-important data could be written in...
> That would be handled either at the fuse layer (the file is written async and fuse returns without waiting for an ack) or at the VM layer, where you would...
What I'm trying to figure out is why, inside a VM, with `dd if=/dev/zero of=test bs=1M count=5000 oflag=sync` I'm getting about 30-40MB/s (and stracing a chunkserver process I can see...
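For comparison, here's a quick probe I'd use (again plain C, nothing LizardFS-specific, hypothetical filename): it does a `write()` followed by `fdatasync()` per 1 MiB block, which is roughly the behavior `oflag=sync` forces, and prints the resulting MB/s. Running it directly on the chunkserver's disk and then inside the VM should show how much of the 30-40MB/s is just per-block sync latency.

```c
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

int main(void) {
    const size_t block = 1 << 20;            /* 1 MiB, same as dd bs=1M */
    const int count = 200;
    char *buf = calloc(1, block);
    if (!buf) return 1;

    int fd = open("sync_probe.bin", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < count; i++) {
        if (write(fd, buf, block) != (ssize_t)block) { perror("write"); return 1; }
        if (fdatasync(fd) != 0) { perror("fdatasync"); return 1; }   /* sync every block */
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%.1f MB/s over %d synced blocks\n", count * (block / 1e6) / secs, count);

    close(fd);
    free(buf);
    return 0;
}
```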