
Performance improvement for transposing data

Open kylebarron opened this issue 7 years ago • 46 comments

There's a tutorial about this on the Arrow C++ documentation: https://arrow.apache.org/docs/cpp/md_tutorials_row_wise_conversion.html

From Arrow to row-wise is the second half of the document.

kylebarron avatar Oct 31 '18 01:10 kylebarron

I think the trick will be to loop through Stata row-wise. At the moment I loop through Stata column-wise. It sounds like the performance loss in Arrow will be smaller than the loss in Stata.
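
For reference, a minimal sketch of the two loop orders over Stata memory, assuming numeric values are stored through the Stata Plugin Interface's `SF_vstore` (the buffer name and dimensions here are hypothetical):

```cpp
// Hypothetical sketch: copy an nvars-by-nobs buffer into Stata memory.
// SF_vstore(var, obs, value) from the Stata Plugin Interface uses
// 1-based variable and observation indices.
#include "stplugin.h"

void fill_column_order(double **values, int nvars, int nobs) {
    // Current approach: one variable at a time, all observations.
    for (int v = 1; v <= nvars; v++)
        for (int o = 1; o <= nobs; o++)
            SF_vstore(v, o, values[v - 1][o - 1]);
}

void fill_row_order(double **values, int nvars, int nobs) {
    // Proposed approach: one observation at a time, all variables,
    // so the out-of-order traversal happens on the Arrow side instead.
    for (int o = 1; o <= nobs; o++)
        for (int v = 1; v <= nvars; v++)
            SF_vstore(v, o, values[v - 1][o - 1]);
}
```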

mcaceresb avatar Oct 31 '18 02:10 mcaceresb

The Stata methods aren't threadsafe, right? So it has to be single threaded?

kylebarron avatar Oct 31 '18 03:10 kylebarron

I think it can be multi-threaded, but I don't think it actually improved performance when I tested it (though I might not have done it right). I think what might benefit quite a bit from multi-threading is the read/write from/to parquet files, not Stata memory.

mcaceresb avatar Oct 31 '18 03:10 mcaceresb

I wouldn't be surprised if that is multi-threaded by default

kylebarron avatar Oct 31 '18 03:10 kylebarron

At least in Python, it reads Parquet files multi-threaded by default.

Sometime soon I'd like to try to go through your code more closely.

kylebarron avatar Oct 31 '18 03:10 kylebarron

I think the following way of structuring this might be faster:

  1. Read the Parquet file into an Arrow table. Multi-threaded; should be fast.
  2. Copy the Arrow table into Stata memory, looping through Stata and the table in row order. Might be slow.

Then the converse for write:

  1. Read the data in memory into an Arrow table, looping through Stata and the table in row order. Might be slow.
  2. Write the Arrow table to a Parquet file; should be fast.

At the moment, the plugin reads the Parquet file in column order and saves to Stata on the fly in column order as well. For writing, it reads the data in memory into an Arrow table but, again, it loops through Stata in column order.
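
A rough sketch of step 2 on the read side (Arrow table into Stata memory, visiting rows in order), assuming all columns are doubles and share the same chunk layout; Arrow's column accessors have changed names across releases, so treat this as illustrative rather than exact:

```cpp
#include <memory>
#include <vector>
#include <arrow/array.h>
#include <arrow/table.h>
#include "stplugin.h"

// Illustrative only: assumes every column is a DoubleArray and that all
// columns share identical chunk boundaries, so one chunk index works
// for the whole table.
void table_to_stata_rowwise(const std::shared_ptr<arrow::Table>& table) {
    const int ncols = table->num_columns();
    const int nchunks = table->column(0)->num_chunks();
    int64_t row = 0;
    for (int c = 0; c < nchunks; c++) {
        // Cache the typed arrays for this chunk once, not per row.
        std::vector<std::shared_ptr<arrow::DoubleArray>> cols(ncols);
        for (int v = 0; v < ncols; v++)
            cols[v] = std::static_pointer_cast<arrow::DoubleArray>(
                table->column(v)->chunk(c));
        const int64_t len = cols[0]->length();
        for (int64_t i = 0; i < len; i++, row++)
            for (int v = 0; v < ncols; v++)
                SF_vstore(v + 1, row + 1, cols[v]->Value(i));  // 1-based
    }
}
```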

mcaceresb avatar Oct 31 '18 04:10 mcaceresb

Yes, I agree with all of that.

kylebarron avatar Oct 31 '18 04:10 kylebarron

I'm doing some benchmarking. Writing 10M rows and 3 variables once the data is in an Arrow table takes 0.5s. Looping over Stata as it is at the moment also takes 0.5s.

Writing that to a .dta file takes 0.2s.

mcaceresb avatar Oct 31 '18 04:10 mcaceresb

Even without further speed improvements, this package would be extremely helpful for anybody who uses Stata and {Python,R,Spark} (though R support for Parquet is still kinda limited), because it would mean that Stata could read binary data exported from one of those platforms.

kylebarron avatar Oct 31 '18 04:10 kylebarron

I wonder if it's not multi-threaded.

I would like to cut processing time in half, ideally. I think that's plausible, but I doubt it can ever be faster than reading/writing .dta files directly (otherwise, what would be the point of .dta files? I imagine there is no looping over entries in that case and Stata just reads and writes the data in bulk).

mcaceresb avatar Oct 31 '18 04:10 mcaceresb

I doubt it can ever be faster than reading/writing .dta files directly

You're comparing reading the entire .dta file into Stata with reading the entire .parquet file... That's not necessarily the right comparison. Reading the first column of the first row group in Parquet is extremely fast. Doing

use col1 in 1/1000 using file.dta

is sometimes extremely slow. I originally was frustrated because when you do

use in 1 using file.dta

it has to load the entire file just to read the first row of the data!

So if there are huge (~500GB) files that can be split into say ~20 row groups, that's something that Parquet could excel at.
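
For illustration, this is roughly what reading a single column of the first row group looks like with the Arrow/Parquet C++ API (error handling is minimal and the exact form of the Open call differs between Arrow versions, so treat it as a sketch):

```cpp
#include <memory>
#include <string>
#include <arrow/io/file.h>
#include <arrow/table.h>
#include <parquet/arrow/reader.h>
#include <parquet/exception.h>

// Read only column 0 of row group 0: the reader never touches the rest
// of the file, which is why this stays fast even for very large files.
std::shared_ptr<arrow::Table> read_first_column_first_group(
        const std::string& path) {
    auto infile = arrow::io::ReadableFile::Open(path).ValueOrDie();
    std::unique_ptr<parquet::arrow::FileReader> reader;
    PARQUET_THROW_NOT_OK(parquet::arrow::OpenFile(
        infile, arrow::default_memory_pool(), &reader));
    std::shared_ptr<arrow::Table> table;
    PARQUET_THROW_NOT_OK(reader->ReadRowGroup(0, {0}, &table));
    return table;
}
```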

kylebarron avatar Oct 31 '18 04:10 kylebarron

Nicely enough, it takes literally a third of the time (one column vs. three).

mcaceresb avatar Oct 31 '18 04:10 mcaceresb

Haha yeah, that's been my experience as well. Generally it's linear in the number of columns you read. And since a ton of data analysis only cares about a few columns out of a dataset of 300, the columnar file format can really make a difference.

kylebarron avatar Oct 31 '18 04:10 kylebarron

It sounds like individual columns can be chunked. I think I can only implement the solution suggested in the Apache docs if the number of chunks and each chunk's size is the same across columns.

I suppose most flat data would be like that, though. I need to check first and fall back to the current out-of-order approach if the columns are not all stored the same way.
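
Something like the following check (a hypothetical helper, not part of the plugin) would confirm that every column shares the reference column's chunk boundaries before committing to the row-order loop:

```cpp
#include <memory>
#include <arrow/table.h>

// Hypothetical helper: true if all columns have the same number of
// chunks and the same chunk lengths as column 0.
bool same_chunk_layout(const std::shared_ptr<arrow::Table>& table) {
    const auto& ref = table->column(0);
    for (int v = 1; v < table->num_columns(); v++) {
        const auto& col = table->column(v);
        if (col->num_chunks() != ref->num_chunks()) return false;
        for (int c = 0; c < ref->num_chunks(); c++)
            if (col->chunk(c)->length() != ref->chunk(c)->length())
                return false;
    }
    return true;
}
```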

mcaceresb avatar Nov 01 '18 04:11 mcaceresb

I think that each column can only be chunked inside a row group. So if the first row group is 10,000 rows, then there won't be any chunks smaller than that for the first 10,000 rows. I'm not sure if that sentence makes sense

kylebarron avatar Nov 01 '18 04:11 kylebarron

I think a row group is only relevant when reading the file from disk, not when iterating over the table already in memory.

mcaceresb avatar Nov 01 '18 04:11 mcaceresb

I've been trying this out on the server on modestly large data that I've been using for a project (a few GiB) and the compression is amazing! Performance for traversing several variables in Stata in column order is pretty poor, though, especially if there are a ton of strings.

I won't spend any time optimizing the row- vs column-order thing until we figure out how the Java version fares, but it's pretty cool to see a fairly complicated 21GiB file down to 5GiB.

mcaceresb avatar Nov 02 '18 19:11 mcaceresb

Yes, the compression is amazing. Using Parquet files with something like Dask or Spark completely opens up doing computation on 20GB files on a laptop.

kylebarron avatar Nov 02 '18 20:11 kylebarron

Just bumping this in case you had any great discovery in the last few months.

Since you're still on my DUA, can you try this command:

parquet use /disk/agebulk3/medicare.work/doyle-DUA51929/barronk-dua51929/raw/pq_from_spark/100pct/med/med2014.parquet

That parquet directory is 2.6GB, but the command has been running for 11 minutes and hasn't finished...

It would be really awesome if you had some way of making a progress bar.

kylebarron avatar Feb 12 '19 16:02 kylebarron

Yup. Had this in the back of my mind. Don't think it'd take too long. Format ideas?

Reading [###        ] X% (obs i / N; group r / R)

?

mcaceresb avatar Feb 12 '19 16:02 mcaceresb

Yeah that seems great

kylebarron avatar Feb 12 '19 16:02 kylebarron

linesize is a problem ):

mcaceresb avatar Feb 12 '19 17:02 mcaceresb

why?

kylebarron avatar Feb 12 '19 17:02 kylebarron

When I try to print the timer, it often gets broken up by the linesize, so it looks all wrong.

I also can't get the formatting to work right. In my tests Stata prints at the end of the program, not as the program is executing. I suspect it's waiting for a newline...

mcaceresb avatar Feb 12 '19 18:02 mcaceresb

Weird. Btw did you try this?

parquet use /disk/agebulk3/medicare.work/doyle-DUA51929/barronk-dua51929/raw/pq_from_spark/100pct/med/med2014.parquet

kylebarron avatar Feb 12 '19 18:02 kylebarron

I'm just going to have it report every 30 seconds or something like that.
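
A minimal sketch of what that could look like through the plugin interface (SF_display prints to the Stata results window; the 30-second throttle and the argument names are just assumptions):

```cpp
#include <cstdio>
#include <ctime>
#include "stplugin.h"

// Emit a progress line at most once every 30 seconds.
void report_progress(long long obs, long long nobs, int group, int ngroups) {
    static time_t last = 0;
    time_t now = time(nullptr);
    if (last != 0 && now - last < 30) return;
    last = now;
    char buf[128];
    snprintf(buf, sizeof(buf),
             "Reading %3.0f%% (obs %lld / %lld; group %d / %d)\n",
             100.0 * obs / nobs, obs, nobs, group, ngroups);
    SF_display(buf);
}
```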

mcaceresb avatar Feb 12 '19 18:02 mcaceresb

parquet desc seems to be working. Takes 3 seconds for me, so that's nice. Allocating the memory for the target data has been running for a minute or so, though. It hasn't even started importing the data...

mcaceresb avatar Feb 12 '19 19:02 mcaceresb

It ended up taking 26.5 minutes for me: [screenshot]

kylebarron avatar Feb 12 '19 19:02 kylebarron

Wow. So, basically, it's Stata that choked? Mmm... Is there a way to add 381 variables that doesn't take 23 minutes? I think it might be somewhat faster if I tell mata not to initialize the variables (or maybe mata performance deteriorates with this many variables? I tested it and it was faster than looping over gen...)

mcaceresb avatar Feb 12 '19 19:02 mcaceresb

I don't know... This was basically the first real-world file I've tried to read into Stata with this

kylebarron avatar Feb 12 '19 19:02 kylebarron