
Speed up transform()

Open Antitesla opened this issue 4 years ago • 6 comments

Thank you for the great product!

My question is: how can I speed up the .transform() function? I use an input dataframe with only ONE row, but the time needed to transform the original features into the new ones is too long, and I don't understand why. Isn't it just a matter of applying the stored formulas to the new dataset with the original features?

Maybe you know how to speed up this process?

Antitesla avatar Apr 27 '21 10:04 Antitesla

To be clear, I process a real-time stream of data and need to do this as fast as possible. So maybe I can comment out some checks in the library code? It seems the library does some checks which slow down the transformation.

Antitesla avatar Apr 27 '21 11:04 Antitesla

The time that the .transform() function needs scales with the number of features that are computed, not so much with the number of data points it is applied to (since that part is internally parallelised by numpy/pandas). So your best option might be to manually check which features are computed when calling transform and then hard-code your own routines for computing these features in your pipeline. There is otherwise indeed a lot of computation done to make sure the transformation works for different types of features, NaNs, etc., which you probably don't need.
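
For illustration, a minimal sketch of such a hard-coded routine (the feature formulas here are made up; the ones your fitted model actually generated are listed in model.new_feat_cols_):

import numpy as np

# hypothetical example: suppose transform() computes the engineered features
# log(x1), x2**3, and x1*x2 from the two original columns x1 and x2;
# for a single row these can be computed directly, skipping all checks:
def my_transform(x1, x2):
    return np.array([np.log(x1), x2 ** 3, x1 * x2])

new_feats = my_transform(2.0, 3.0)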

cod3licious avatar Apr 27 '21 11:04 cod3licious

import numpy as np
import pandas as pd
from sympy import lambdify
from autofeat.feateng import colnames2symbols

def fast(self, df):
    feat_array = np.zeros((len(df), len(self.new_feat_cols_)))
    for i, expr in enumerate(self.new_feat_cols_):
        # original columns that occur in this feature's formula
        cols = [c for j, c in enumerate(self.feateng_cols_) if colnames2symbols(c, j) in expr]
        # translate the sympy expression into a plain numpy function
        f = lambdify([self.feature_formulas_[c] for c in cols], self.feature_formulas_[expr])
        # only evaluate rows without NaNs in the required columns
        not_na_idx = df[cols].notna().all(axis=1).to_numpy()
        feat_array[not_na_idx, i] = f(*(df[c].to_numpy(dtype=float)[not_na_idx] for c in cols))
    df = df.join(pd.DataFrame(feat_array, columns=self.new_feat_cols_, index=df.index))
    return df
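
A hypothetical way to call this, assuming model is an already fitted AutoFeatRegressor and df_one_row is the single-row dataframe from the stream (since self is just the first parameter, the function can be called directly without attaching it to the class):

df_new = fast(model, df_one_row)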

Antitesla avatar Apr 27 '21 14:04 Antitesla

I cut out everything from the code that I thought was unneeded. The performance is better now, but still not good enough. Maybe you have an idea how to make it a bit faster, so that I don't have to implement the formula application on my side? )

Antitesla avatar Apr 27 '21 14:04 Antitesla

you could probably parallelize the for loop, i.e., apply the transformations for each feature in parallel and then concatenate all the results and add them to the dataframe. The slowest part, however, is the sympy work happening in lambdify, where the symbolic expression is translated into an actual numpy computation; you can only get rid of that by implementing the transformations you need directly in numpy
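
A minimal sketch of both ideas, under the same attribute assumptions as in the snippet above (build_cache and fast_cached are hypothetical helpers, not part of the library, and joblib is an extra dependency): the sympy-to-numpy translation is paid once up front, and the per-feature loop runs in parallel. NaN handling is omitted for brevity, and for a single row it's worth benchmarking whether the thread overhead pays off:

import numpy as np
from joblib import Parallel, delayed
from sympy import lambdify
from autofeat.feateng import colnames2symbols

def build_cache(model):
    # do the expensive sympy -> numpy translation only once, at startup
    cache = {}
    for expr in model.new_feat_cols_:
        cols = [c for j, c in enumerate(model.feateng_cols_) if colnames2symbols(c, j) in expr]
        cache[expr] = (cols, lambdify([model.feature_formulas_[c] for c in cols],
                                      model.feature_formulas_[expr]))
    return cache

def fast_cached(model, df, cache, n_jobs=4):
    def one_feature(expr):
        cols, f = cache[expr]
        return f(*(df[c].to_numpy(dtype=float) for c in cols))
    # threading backend: the dataframe is shared, nothing has to be pickled
    feats = Parallel(n_jobs=n_jobs, backend="threading")(
        delayed(one_feature)(e) for e in model.new_feat_cols_)
    return np.column_stack(feats)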

cod3licious avatar Apr 27 '21 20:04 cod3licious

Forking for each feature is a nice idea. Thank you!

Antitesla avatar Apr 27 '21 20:04 Antitesla