axlsx
Memory usage for large datasets
Hi, I'm facing a memory issue when trying to generate an xlsx file with >600 columns and ~10k rows. Memory usage grows to roughly 3 GB, which causes the dyno to be restarted by Heroku.
I'm wondering if there is a known way to generate the file without such heavy memory use.
The way I'm doing it can be represented by this script:
require 'axlsx'

Axlsx::Package.new do |p|
  p.workbook.add_worksheet(:name => "Test") do |sheet|
    # ~11k rows x 600 columns of dummy data
    11_000.times do
      sheet.add_row ["test data"] * 600
    end
  end
  p.serialize('tmp/test.xlsx')
end
We're experiencing the same issue; perhaps RubyZip could be replaced with https://bitbucket.org/winebarrel/zip-ruby/wiki/Home ?
RubyZip isn't the issue I think, axlsx is. One of the problems is that axlsx creates an enormous string which is then passed to RubyZip. That's very inefficient.
I tried to fix it at https://github.com/jurriaan/axlsx/tree/fix_enormous_string
Please report if this helps (it should reduce usage to 1.5 GB or so).
@guilleva @michaeldauria Did this change work for you? It should be possible to reduce the memory usage even further, but it's a beginning :)
Hi jurriaan, thanks for looking into this. I haven't been able to test it yet, I'm really sorry. I will do my best to try it today and let you know.
Thanks again
Just pushed some new commits; the example now uses ~700 MB on my system.
I will try this out today and report back
It is possible it won't help in your situation, though. The fact is that at the moment axlsx allocates way too many objects and causes the heap to grow very large. I'm trying to reduce the number of allocated objects.
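For anyone who wants to check whether these commits help, here is a minimal measurement sketch around the repro script, using only the Ruby standard library. Reading VmRSS from /proc assumes Linux, and GC.stat[:total_allocated_objects] assumes Ruby 2.2 or newer:

require 'axlsx'

def rss_mb
  # Resident set size in MB; assumes a Linux /proc filesystem.
  File.read("/proc/#{Process.pid}/status")[/VmRSS:\s+(\d+)/, 1].to_i / 1024
end

allocated_before = GC.stat[:total_allocated_objects]
puts "RSS before: #{rss_mb} MB"

Axlsx::Package.new do |p|
  p.workbook.add_worksheet(:name => "Test") do |sheet|
    11_000.times { sheet.add_row ["test data"] * 600 }
  end
  p.serialize('tmp/test.xlsx')
end

puts "RSS after: #{rss_mb} MB"
puts "Objects allocated: #{GC.stat[:total_allocated_objects] - allocated_before}"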
This did not end up helping much; memory seemed more stable, but we still ran out anyway.
We have a similar issue. We're building sheets with hundreds of thousands of rows and as many as 100 tabs. axlsx uses 5.3GB to build a 111MB xlsx file.
I would be happy to test some changes on this, but your branch is no longer available. Any further updates on this?
@jeremywadsack I've merged most of the changes into the master branch, but it's still using a lot of memory. I hope to have some time to look at this issue again in the coming weeks
Hi. Any updates on this?
Our Delayed Job workers aren't processing jobs (out of memory), probably because of this issue. Any updates on the status? Cheers!
+1 on this. We are having the same issue with huge datasets :/
We just ended up spinning up more hardware to deal with this for the time being. :/
Old thread. We're also having problems. 2.1 MB file needs 650MB of RAM.
Still waiting for a solution
Yeah. We would still love a solution. We run a 24 GB server instance so we can build 15 MB files.
Just had another build crash because a 400 MB Excel file took 22 GB of memory to build. (Incidentally, Excel used 1.5 GB to open this file.)
In the Python world we use xlsxwriter to generate Excel files, which has a few options to reduce memory, including flushing rows to disk as they are written. There are trade-offs there, but it's possible something like that could be done with axlsx as well?
+1 for a solution/update for this.
+1 for a solution
First of all, thank you for this wonderful gem. I'm also having this problem when trying to generate 30k rows of text. As an alternative (and as opposed to increasing memory), does Axlsx have the capability to append data to an existing spreadsheet or join two of them without exhausting memory?
I'd at least be able to write multiple files and join them together at the end.
Maybe there could be an option to write to a temp file instead of filling a StringIO object; once it is finished, everything could just be placed into the zip file. Thoughts?
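To make the temp-file idea concrete, here is a rough sketch of the general approach, not anything axlsx supports today: stream a simplified worksheet XML to a Tempfile row by row, then add that file to the zip with rubyzip, so the whole document never has to live in one giant Ruby string. The XML here is hand-rolled and deliberately incomplete.

require 'tempfile'
require 'zip' # rubyzip

# Write the sheet XML to disk row by row instead of accumulating it in memory.
xml = Tempfile.new(['sheet', '.xml'])
xml.write '<worksheet xmlns="http://schemas.openxmlformats.org/spreadsheetml/2006/main"><sheetData>'
row_cells = '<c t="inlineStr"><is><t>test data</t></is></c>' * 600
11_000.times do |r|
  xml.write "<row r=\"#{r + 1}\">#{row_cells}</row>"
end
xml.write '</sheetData></worksheet>'
xml.close

# Add the finished temp file to the archive without slurping it into a string.
Zip::File.open('tmp/manual.xlsx', Zip::File::CREATE) do |zip|
  zip.add('xl/worksheets/sheet1.xml', xml.path)
  # A real workbook also needs [Content_Types].xml, xl/workbook.xml, _rels/, etc.
end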
The XLSX file isn't a single file. It's a collection of files that are zipped together. Which I think makes this more complicated.
The Python library xlsxwriter has a feature like this:
The optimization works by flushing each row after a subsequent row is written. In this way the largest amount of data held in memory for a worksheet is the amount of data required to hold a single row of data.
This does introduce limits in how you can use it:
Since each new row flushes the previous row, data must be written in sequential row order when 'constant_memory' mode is on:
Perhaps something like this could be implemented in axlsx, though?
I haven't taken a look at it just yet, but I'm assuming it's just like a Word doc... a bunch of XML files zipped together. However, with Word docs you can indeed extract certain contents from the word/document.xml file, and I've done this to pull templates and insert data into other Word docs (same process).
In the meantime, I'm going to try working on my own solution. I haven't seen much feedback from guilleva here, so I'm not really expecting a fix anytime soon.
The XLSX file isn't a single file. It's a collection of files that are zipped together. Which I think makes this more complicated.
But the file that holds the actual rows, can it still be written sequentially?
Looking inside an xlsx file I can see that xl/sharedStrings.xml holds the shared string data (the row structure itself is in xl/worksheets/sheet1.xml).
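For anyone who wants to look inside the container themselves, here is a small sketch using rubyzip to list the parts of an existing .xlsx (the path is just an example):

require 'zip' # rubyzip

# Print every part of the xlsx package with its uncompressed size.
Zip::File.open('tmp/test.xlsx') do |zip|
  zip.each do |entry|
    puts format('%10d  %s', entry.size, entry.name)
  end
end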
+1, this is a huge issue for me. I'd be happy to hear about possible workarounds.
+1, big issue for me too.
Would it help if I added a bounty via Bountysource? @randym what do you think?
#352 is a similar request to append rows to an existing file.
I've been struggling with the same issues; when generating >100k rows our servers started to use swap and became very slow.
Solved it by making my own Excel library, https://github.com/paxa/fast_excel. It works 3-4 times faster and consumes 5-20 MB of RAM regardless of data size.
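For anyone wanting to try it on the repro case from this thread, here is a rough sketch based on my reading of the fast_excel README; treat the exact method names and options as assumptions and check the project's docs:

require 'fast_excel'

# constant_memory mode streams rows to disk as they are written (via libxlsxwriter),
# so memory stays flat regardless of row count.
workbook = FastExcel.open('tmp/test.xlsx', constant_memory: true)
worksheet = workbook.add_worksheet('Test')

11_000.times do
  worksheet.append_row(['test data'] * 600)
end

workbook.close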