fst
write_fst Seems To Skip Small Tables When Writing In A for Loop
I am using fst version 0.9.8 with R-4.3.2.

I am writing 'data.table' class data frames in a for loop. The tables have different numbers of rows because they result from in-silico chemical modification of a list of peptides (protein fragments).

When writing these data tables to csv format using data.table::fwrite (with append = TRUE), all modified peptides are written correctly and all are present. When writing them to fst format with compress = 50 or compress = 100, data tables with 10-11 rows (resulting from peptides with 1-2 modifications) are skipped, while the bigger ones are written as expected.
Unfortunately, proprietary rights do not allow me to present a full example, just the call to write_fst, with the uniform_encoding argument left at its default (for speed, as there are millions of such tables to be written):
write_fst(dt1, paste0(fname, '-', i, '.fst'), compress = 100)
Here, dt1 is a 'data.table' class data frame residing in memory, fname is a character vector of length 1, i is the current iteration, and the arguments of paste0 form the unique name of the fst file written to disk. The fst files that are written do have names formatted as expected.

The upstream code is the same; the only difference at this point is the output file format, decided by an if control selected by the user: if (compress == yes) write "fst" else write "csv".
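A stripped-down sketch of the writing pattern, with made-up stand-in data (the real column names and contents are proprietary, so peptide and mass below are placeholders):

```r
library(data.table)
library(fst)

# Made-up stand-in data: data.tables with varying row counts, mimicking
# peptides with different numbers of modifications
tables <- lapply(c(10, 11, 5000, 12000), function(n) {
  data.table(peptide = sprintf("PEP%05d", seq_len(n)),
             mass    = runif(n, 500, 5000))
})

fname <- "peptides"  # placeholder base name

for (i in seq_along(tables)) {
  dt1 <- tables[[i]]
  write_fst(dt1, paste0(fname, '-', i, '.fst'), compress = 100)
}

# Check afterwards which of the expected files actually made it to disk
file.exists(paste0(fname, '-', seq_along(tables), '.fst'))
```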
Thank you!
I have personally never seen this behaviour, and am unable to replicate it based on the information given. A few questions whose answers might enable you to narrow down the issue without de-anonymising your data include:
- If the compress argument is set to 0, are the 10-11 row tables successfully written? (If so, this may be an issue with the compression fst does; see the sketch at the end of this reply for the first two checks.)
- If you reduce the number of columns of dt1 you are trying to write, or limit to certain column classes (e.g. just writing out the numeric columns via dt1[, .SD, .SDcols = is.numeric]), are the 10-11 row tables successfully written? (If so, this may be an issue with the fst package being unable to write certain column types, or a certain number of columns.)
- If you change the order of your loop so that all the larger tables are written first and the 10-11 row tables afterwards, are the 10-11 row tables successfully written? (If so, I'm not sure what could be causing the issue.)
Without a reproducible example I doubt much more help can be provided, but following the advice in this article may help you create one: https://stackoverflow.com/questions/5963269/how-to-make-a-great-r-reproducible-example
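For reference, a rough, untested sketch of the first two checks; dt1 here is just a made-up stand-in for one of your 10-11 row tables:

```r
library(data.table)
library(fst)

# dt1 stands in for one of the small tables that goes missing
dt1 <- data.table(peptide = c("AAK", "GGR", "VLK"),
                  mass    = c(288.2, 289.2, 359.3))

# 1. Does the small table survive with compression switched off?
write_fst(dt1, "check-compress0.fst", compress = 0)
file.exists("check-compress0.fst")

# 2. Does a numeric-only subset survive with full compression?
write_fst(dt1[, .SD, .SDcols = is.numeric], "check-numeric.fst", compress = 100)
file.exists("check-numeric.fst")

# Read one back to confirm the contents rather than just the file's existence
read_fst("check-compress0.fst", as.data.table = TRUE)
```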
@AndyTwo2 My answers bullet-by-bullet:

- Frankly, I haven't tried zero compression, and the job is now running, expected to produce an over 2-billion-row table written in chunks as fst files. I did think compression might be involved and tried the default and 100 values, but not zero. I am not sure I mentioned it before, but the screen message confirms that the small tables are being written to disk the same as the long ones; however, only the long ones are on disk. Could it be a disk bus/buffer issue?
- Neither of these suggestions is possible with the current data. Each table contains character and numeric columns. Separating them is not possible because this is an intermediate process: somebody else is transferring the files to BigQuery. We should have sent them directly to BigQuery, but then we would have been concerned with connection drops and other events, as these are very long jobs.
- This one I had already tried, to no avail: same outcome.
When the big job is complete I will go back through all the archives containing fst files, check which peptides have been written and which have not, and then reiterate on the left-out ones. Tables of comparable lengths found in the same loop are being written as they should be.
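Roughly what I plan to run for that check, assuming the files follow the fname-i.fst naming above; n_tables and build_modified_peptides are placeholders for the real iteration count and the upstream step that produces dt1:

```r
library(fst)

expected <- seq_len(n_tables)              # n_tables: total number of iterations (placeholder)
paths    <- paste0(fname, '-', expected, '.fst')

missing  <- expected[!file.exists(paths)]  # iterations whose fst file never made it to disk

# Re-run only the left-out iterations
for (i in missing) {
  dt1 <- build_modified_peptides(i)        # placeholder for the upstream step that builds dt1
  write_fst(dt1, paste0(fname, '-', i, '.fst'), compress = 100)
}
```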
Thanks for the suggestions!
@AndyTwo2 I should probably have mentioned that although the tables differ in number of rows, they all contain the same columns; this is in reference to the second bullet in your reply.

Also, I have tried both options of uniform_encoding, with the same result.
Thank you!