autofeat
Speed up transform()
Thank you for the great product!
My question is: how can I speed up the .transform() function? I use an input dataframe with only ONE row, but the time needed to transform the original features into the new ones is too long, and I don't understand why. Isn't it just a matter of applying the stored formulas to the new dataset with the original features?
Maybe you know how to speed up this process?
To be clear, I process a real-time stream of data and need to do this as fast as possible. So maybe I can comment out some checks in the library code? It seems the library does some checks that slow down the transformation.
The time that the .transform() function needs scales with the number of features that are computed, not so much with the number of data points it is applied to (since this is internally parallelized by numpy/pandas). So your best option might be to manually check which features are computed when calling transform and then hard-code your own routines for computing these features in your pipeline. There is otherwise a lot of computation done to make sure the transformation works for different types of features, NaNs, etc., which you probably don't need.
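For example, if the fitted model ends up selecting only a couple of features, a hard-coded version could be as simple as the sketch below (the formulas x1*x2 and log(x3) are hypothetical placeholders; the real ones depend on what autofeat selected for your data, which you can inspect via new_feat_cols_):

import numpy as np

def hardcoded_transform(df):
    # hypothetical stand-ins for the formulas autofeat actually selected
    df["x1*x2"] = df["x1"].to_numpy(dtype=float) * df["x2"].to_numpy(dtype=float)
    df["log(x3)"] = np.log(df["x3"].to_numpy(dtype=float))
    return df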
import numpy as np
import pandas as pd
from sympy.utilities.lambdify import lambdify
from autofeat.feateng import colnames2symbols

def fast(self, df):
    # one column per engineered feature
    feat_array = np.zeros((len(df), len(self.new_feat_cols_)))
    for i, expr in enumerate(self.new_feat_cols_):
        # original columns whose symbols occur in this feature's formula
        cols = [c for j, c in enumerate(self.feateng_cols_) if colnames2symbols(c, j) in expr]
        # translate the sympy formula into a numpy function
        f = lambdify([self.feature_formulas_[c] for c in cols], self.feature_formulas_[expr])
        # evaluate only the rows where all input columns are non-NaN
        not_na_idx = df[cols].notna().all(axis=1)
        feat_array[not_na_idx, i] = f(*(df[c].to_numpy(dtype=float)[not_na_idx] for c in cols))
    df = df.join(pd.DataFrame(feat_array, columns=self.new_feat_cols_, index=df.index))
    return df
I cut everything I thought was unneeded from the code. Performance is better now, but still not good enough. Maybe you have an idea how to make it a bit faster, so I don't have to implement the formula application on my side? )
You could probably parallelize the for loop, i.e., apply the transformations for each feature in parallel and then concatenate all the results and add them to the dataframe. But the slowest part is the sympy work happening in lambdify, where the symbolic formula is translated into an actual numpy computation, and that you only get rid of by implementing the transformations you need directly in numpy.
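As a sketch combining both ideas (assuming a fitted autofeat model stored in a variable model, with the same attributes used in the snippet above): the lambdify cost can at least be moved out of the hot path by compiling each formula once after fitting, and the per-feature evaluation can then run in parallel, e.g. with joblib threads:

from joblib import Parallel, delayed
from sympy.utilities.lambdify import lambdify
from autofeat.feateng import colnames2symbols

# compile every formula to a numpy function once, right after fitting,
# so the per-call transform no longer pays the sympy translation cost
compiled = {}
for expr in model.new_feat_cols_:
    cols = [c for j, c in enumerate(model.feateng_cols_) if colnames2symbols(c, j) in expr]
    f = lambdify([model.feature_formulas_[c] for c in cols], model.feature_formulas_[expr])
    compiled[expr] = (cols, f)

def eval_feature(df, expr):
    # evaluate one precompiled feature on the incoming dataframe
    cols, f = compiled[expr]
    return f(*(df[c].to_numpy(dtype=float) for c in cols))

# threads avoid pickling the lambdified functions; numpy releases the GIL
results = Parallel(n_jobs=-1, prefer="threads")(
    delayed(eval_feature)(df, e) for e in compiled)
for expr, vals in zip(compiled, results):
    df[expr] = vals

For a single-row stream, the precompilation is likely the bigger win, since the parallelization overhead can exceed the per-feature work.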
Nice idea to fork a worker for each feature. Thank you!