file.isBusy() remains true after file.sync() when logging to exFAT using RingBuf on Teensy 4.1
I'm using SdFat with exFAT and RingBuf for high-frequency (1 kHz) data logging. Everything works fine until I periodically call file.sync() after every 100 sector writes. After some time, file.isBusy() remains true forever, preventing further calls to writeOut().
This is not a crash or hang: the code continues running, but isBusy() never clears after one of the sync() calls, effectively stalling the logging system.
The function below is called after gathering data when an interrupt occurs on a GPIO input. It is not part of the ISR.
Earlier I was not using sync(), but then I was seeing file corruption after 2-3 hours of logging. This sync() call is also blocking my code for 5-30 ms. Is there any method to store data without blocking?
void buff_fill()
{
    rb.write(startBytes, NUM_START_BYTES);
    rb.write(rtc._time, 7);
    rb.write(tempBuff, 6);
    rb.write(odostat);
    rb.write(&sc, NUM_COUNT_BYTES);
    rb.write(&x, sizeof(x)); // Write the time taken to write data to the file
    if (rb.bytesUsed() >= SECTOR_SIZE && !file.isBusy())
    {
        if (!rb.writeOut(SECTOR_SIZE))
        {
            Serial.println("Failed to write");
            digitalWrite(led, LOW); // Turn off LED on error
        }
        c++;
        s = 1;
    }
    if (c > 100 && !file.isBusy())
    {
        x = micros();
        file.sync(); // Ensure data is written to the file
        x = micros() - x;
        if (x > m)
            m = x;
        f = 0;
        s = 2;
        c = 0;
    }
}
- Is file.sync() with exFAT guaranteed to eventually clear isBusy()?
- Is there a recommended sync() strategy when using RingBuf with exFAT?
- Any way to manually force-clear the busy flag or delay checking?
Thank you for any kind of help
You should not call sync(). It won't write data from the ring buffer; it will cause the file to be busy, since it forces the directory entry and the internal sector cache to be written. It does nothing to the ring buffer.
Note: RingBuf.h has no internal calls to file.sync().
You never test these writes for an error with rb.getWriteError(). If the ring buffer fills, data will be lost.
rb.write(startBytes, NUM_START_BYTES);
rb.write(rtc._time, 7);
rb.write(tempBuff, 6);
rb.write(odostat);
rb.write(&sc, NUM_COUNT_BYTES);
rb.write(&x, sizeof(x)); // Write the time taken to write data to the file
Here is another RingBuf example. Note that it only accesses the file directly with two calls, file.isBusy() and file.curPosition(). These calls do not access the SD: isBusy() just checks the level of an SD pin, and curPosition() reads an internal class variable.
The whole purpose of RingBuf is to limit SD access to as few writes as possible and to wait for not-busy between writes.
All other file access is handled by the ring buffer. Also note that shared SPI is not allowed.
Thank you for the quick response. My real problem is file corruption, and to tackle it I used file.sync() after some writes, as you suggested in one of the issues here regarding file corruption. I need to log data for 2-3 days, maybe more, and right now the file is getting corrupted even after 2 hours of logging. It works for a short duration, like 1 hour, but not more than that. After logging for a long time, when I check the card via a card reader it gives me an error like "can't open file, file corruption".
Can you suggest what is going wrong here? I am using a Teensy 4.1 and a SanDisk U3 V30 128 GB SD card. To create the file I call this function at the start of logging:
bool createNewFile()
{
    // Close the current file if it's open
    if (file.isOpen())
    {
        Serial.println("Closing existing file...");
        file.truncate();
        // file.sync(); // Ensure data is written before closing
        file.close();
    }
    if (!file.open(fileName, O_WRONLY | O_CREAT | O_SYNC)) // O_RDWR | O_CREAT | O_TRUNC
        Serial.println("File open failed");
    else
        Serial.printf("File opened successfully %s\n", fileName);
    file.preAllocate(MAX_FILE_SIZE);
    rb.begin(&file); // Initialize ring buffer with file object
    getFileName();
    return true;
}
This function is called for logging in the loop:
void buff_fill()
{
    rb.write(startBytes, NUM_START_BYTES);
    rb.write(rtc._time, 7);
    rb.write(tempBuff, 6);
    rb.write(odostat);
    rb.write(&sc, NUM_COUNT_BYTES);
    rb.write(&x, sizeof(x)); // Write the time taken to write data to the file
    if (rb.bytesUsed() >= SECTOR_SIZE && !file.isBusy())
    {
        if (!rb.writeOut(SECTOR_SIZE))
        {
            Serial.println("Failed to write");
            digitalWrite(led, LOW); // Turn off LED on error
            // file.truncate(); // Truncate the file to remove any incomplete data
            // file.close(); // Close the file to prevent corruption
            // file.open(fileName, O_APPEND); // Reopen the file for writing
        }
    }
}
And this function is called to close the file:
void closeFile()
{
    if (file.isOpen())
    {
        Serial.println("Closing file...");
        if (file.isBusy())
        {
            Serial.println("File is busy, waiting for it to finish...");
            int busyCount = 0;
            const int maxBusyCount = 10; // 10 * 1000 ms = 10 seconds max wait
            while (file.isBusy())
            {
                Serial.printf("File is busy, waiting... (%d)\n", busyCount);
                delay(1000); // Wait until the file is not busy
                busyCount++;
                if (busyCount > maxBusyCount)
                {
                    Serial.println("File is busy for too long, exiting wait loop.");
                    break;
                }
            }
        }
        if (!file.isBusy())
        {
            rb.sync(); // Ensure all data is written to the file
            delay(1000); // Wait for a short period to ensure data is written
            // file.sync(); // Ensure data is written before closing
            Serial.println("sync done");
        }
        if (!file.truncate())
            Serial.println("File truncate failed");
        if (!file.close())
            Serial.println("File close failed");
    }
}
Your problem will not be solved by calling sync(). Unless you have a crash, sync() will likely make corruption more likely, since it causes the directory entry to be rewritten frequently. SD cards have huge flash pages and hold data in RAM buffers until forced to write flash.
If you use RingBuf with single-sector (512 byte) writes, there will never be data in the sector cache, so you will be rewriting the directory entry every time you call sync(). This causes the SD to move and remap the directory entry due to wear leveling.
The case sync() helps with is a crash or power failure. sync() writes any data in the sector cache, then writes the directory entry. If you do this occasionally, you may be able to read the file up to that point if a crash or power failure happens.
I have no idea what is causing file/SD corruption. Is there noise or possible power fluctuation? Also SD cards have large spikes in current consumption when flash is being programmed which can cause voltage spikes.
There is nothing you can do to make an SD go not busy. Once the SD needs to program flash it will go busy. When it finishes, it will go not busy.
If you add directory writes, the busy time can be very long, since the card must discontinue the contiguous write of data. From the SD spec:
While the card should try to maintain that the busy indication of a write operation does not exceed 250 ms in the case of an SDXC card, if the card cannot maintain operation within 250 ms busy, the card can indicate write busy up to 500 ms, including single and multiple block write, in the following scenarios:
If an SD stays busy, there is a problem with the SD.
Have you tried other SD cards? Consumer grade cards have limited flash endurance.
TLC (Triple-Level Cell) flash, commonly found in consumer-grade cards, typically has around 500 write cycles per physical flash page.
Cards are starting to use QLC (Quad-level cell) that stores 4 bits of data per memory cell. These may have even less endurance.
This works since consumer cards have wear leveling algorithms that move and remap areas that get excessive writes. Still you are really hitting the directory area.
You may want to try high endurance cards like the SanDisk High Endurance or Max Endurance microSDXC cards. These are made for recording video in security cameras.
Your problem will not be solved called by calling sync. Unless you have a crash, sync likely will make corruption more likely since it causes the directory entry to be rewritten frequently. SD cards have huge flash pages and store data in RAM buffers until forced to write flash.
Can you tell me how much data I can write before closing a file? For example, I open a file, log data for some hours, then close it. How much data can I log? Data logging can go on for 3-4 days non-stop, with approximately 4-5 GB logged each day, and due to the time budget I won't be able to open a new file. So is there any limitation on file size?
Have you tried other SD cards? Consumer grade cards have limited flash endurance. You may want to try high endurance cards like the SanDisk High Endurance or Max Endurance microSDXC cards. These are made for recording video in security cameras.
Yes. At first I was using a consumer-grade SanDisk SD card, but now I am also working with SanDisk High Endurance and Max Endurance cards.
how much data i can write once before closing a file
With exFAT, which is the file system for SD cards larger than 32 GB, you can log the entire SD, so for a 2 TB card, a bit less than 2 TB.
exFAT is designed for large files. The maximum recommended volume size is 512 TB. The real limits are huge, see this.
You should preallocate a file larger than you need, then truncate() it when you close it. If you don't preallocate the file, isBusy() won't work, since a long search of the bitmap may happen to find space.
Some users preallocate files to fill the SD and keep an array of open file structures. Then a switch is relatively fast.
SdFat-beta has modifications to make the unused part of files conform to the specification. This will change how un-truncated files behave, so it is important to truncate files at validLength(). You can always close them and truncate them later by using validLength().
Thank you for your support. Logging is working now, and I think that format error was a hardware issue. I am not calling sync() anymore.
I am creating a big file (10 GB for now) for data logging.
I am confused about one thing. Should I use this:
file.open(fileName, O_WRONLY | O_CREAT | O_TRUNC)
or this?
file.open(fileName, O_WRONLY | O_CREAT | O_SYNC)
Or can I use both O_TRUNC and O_SYNC?
Thanks once again
can i use both O_TRUNC and O_SYNC
I removed use of O_SYNC long ago. It does nothing. It is also not recommended in Linux. It is still defined in fcntl.h, which I use in most board packages; there are many other flags in fcntl.h which I don't check for.
A sync after every write kills performance with modern flash SSDs and SD cards.
Here is an example with a high end Samsung 256GB PRO Plus.
With no O_SYNC:
FILE_SIZE_MB = 1
BUF_SIZE = 512 bytes
Starting write test, please wait.

write speed and latency
speed,max,min,avg
KB/Sec,usec,usec,usec
16948.07,2747,25,26

read speed and latency
speed,max,min,avg
KB/Sec,usec,usec,usec
15872.00,42,31,31

With O_SYNC enabled in write:
FILE_SIZE_MB = 1
BUF_SIZE = 512 bytes
Starting write test, please wait.

write speed and latency
speed,max,min,avg
KB/Sec,usec,usec,usec
280.57,7759,1299,1824

read speed and latency
speed,max,min,avg
KB/Sec,usec,usec,usec
16128.00,36,31,31
A factor of 60 slower. O_SYNC kills performance of modern flash storage devices. Avoid sync() except to provide an occasional point where recovery from a crash may be possible.