DaLime
> If anyone has particular recommendations on what aspects of feature generation are most important to focus on in documentation, please feel free to discuss in this thread. I guess...
Recently I was dealing with another related problem. Now I am starting to think that one mask per image per epoch plus some random augmentation should be fine....
It is somewhere around line 288, but I am not sure what breaks or why. `temp = y.groupby(X[col].astype(str)).agg(['cumsum', 'cumcount'])` produces a temp that is all NaN values.
I made a workaround, but I am not 100% sure it solves the problem. However, it yields the same results I get with pandas 1.0.4. `t1 = y.groupby(X[col].astype(str)).cumsum()`...
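If it helps, here is a minimal sketch of the workaround I mean: calling the two cumulative functions separately instead of through `.agg(['cumsum', 'cumcount'])`. The helper name and the toy data are mine; only the `y` / `X[col]` grouping comes from the snippet above:

```python
import pandas as pd

def cumsum_cumcount(y: pd.Series, keys: pd.Series) -> pd.DataFrame:
    # Group the target by the stringified column, as in the original line 288.
    grouped = y.groupby(keys.astype(str))
    # Calling cumsum() and cumcount() separately avoids the all-NaN frame
    # that .agg(['cumsum', 'cumcount']) gives me on newer pandas versions.
    return pd.DataFrame({"cumsum": grouped.cumsum(),
                         "cumcount": grouped.cumcount()})

# Tiny usage example (made-up data):
y = pd.Series([1, 0, 1, 1])
col = pd.Series(["a", "a", "b", "b"])
temp = cumsum_cumcount(y, col)
```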
I think the problem is here: https://github.com/haoxiangsnr/Wave-U-Net-for-Speech-Enhancement/blob/c8c9d8945959ba8c3aa1e7cb18cddc10dbc52210/trainer/trainer.py#L78 should be:
```python
if padded_length != 0:
    enhanced = enhanced[:, :, :-padded_length]
    mixture = mixture[:, :, :-padded_length]
```
@Lerry123 did you try it on the same dataset? And what are the advantages of training on VCTK? I am training on the same dataset, 500 epochs so far, and the quality...
@Lerry123 got it, thanks! PESQ = 2.63; is that with VCTK?
From AutoInt `To allow the interaction between categorical and numerical features, we also represent the numerical features in the same low-dimensional feature space. Specifically, we represent the numerical feature an...
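If I am reading that part of the paper right, the idea is just to scale a learned per-field embedding vector by the scalar feature value, so numerical fields land in the same d-dimensional space as the categorical embeddings. A rough PyTorch sketch of that reading (the module name and dimensions are mine, not from the AutoInt code):

```python
import torch
import torch.nn as nn

class NumericFieldEmbedding(nn.Module):
    """One numerical field -> a d-dimensional vector: e_m = v_m * x_m."""
    def __init__(self, embed_dim: int = 16):
        super().__init__()
        # Learned embedding vector for this numerical field.
        self.v = nn.Parameter(torch.randn(embed_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch,) scalar feature values -> (batch, embed_dim)
        return x.unsqueeze(-1) * self.v
```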
I use version 1.1.9 on Linux and it does not work. Is there any workaround for it? I have no idea how to fix it.
Thanks a lot for your response, very much appreciated!