Asad Abbas
Any update? I'm also getting this error.
This approach looks really promising; I hope the author will take some time to elaborate further on the preprocessing steps for a custom dataset.
Thanks for your explanation. I'm trying to understand this a bit more clearly:

```
def create_input_batch(batch, is_minknet, device="cuda", quantization_size=0.05):
    if is_minknet:
        print("pre", batch["coordinates"][:, 1:])
        print("pre", batch["coordinates"][:, 1:].shape)
        batch["coordinates"][:, 1:] =...
```
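For context, here is a minimal sketch of what I understand that quantization step to be doing. The division by `quantization_size` and the returned tensors are assumptions based on common MinkowskiEngine examples, not the author's confirmed code:

```
# Minimal sketch (assumed, not the author's confirmed code): voxel
# quantization of point coordinates for a MinkowskiNet-style input batch.
import torch

def create_input_batch(batch, is_minknet, device="cpu", quantization_size=0.05):
    if is_minknet:
        # Column 0 is assumed to be the batch index; columns 1: are xyz.
        # Dividing xyz by the voxel size expresses them in voxel units.
        batch["coordinates"][:, 1:] = batch["coordinates"][:, 1:] / quantization_size
        return batch["coordinates"].to(device), batch["features"].to(device)
    # Dense models are assumed to consume the raw features directly.
    return batch["features"].to(device)

coords = torch.tensor([[0, 0.10, 0.20, 0.30],
                       [0, 0.55, 0.05, 0.90]])
batch = {"coordinates": coords.clone(), "features": torch.ones(2, 3)}
c, f = create_input_batch(batch, is_minknet=True)
print(c)  # xyz columns scaled by 1 / 0.05 = 20; batch index untouched
```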
Yeah, same problem.
Similarity search doesn't work well when the number of ingested documents is large, say over one hundred.
I'm also facing a similar problem. Is there a way to fine-tune embedding models on custom datasets (e.g. text from 100 PDFs)? For this purpose, do I need to label...
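One common route, sketched below with sentence-transformers, is to build (query, relevant passage) pairs from the PDFs and fine-tune with an in-batch contrastive loss. The model name, the placeholder pairs, and the output path are illustrative assumptions, not a prescribed pipeline:

```
# Hedged sketch: fine-tuning an embedding model on custom text pairs.
# The two example pairs are placeholders; real pairs would be mined or
# hand-labeled from the PDF corpus (e.g. a generated question per chunk).
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("all-MiniLM-L6-v2")

train_examples = [
    InputExample(texts=["what is the warranty period?",
                        "The warranty covers defects for 24 months."]),
    InputExample(texts=["how do I reset the device?",
                        "Hold the power button for ten seconds to reset."]),
]

loader = DataLoader(train_examples, shuffle=True, batch_size=2)
# MultipleNegativesRankingLoss treats the other in-batch passages as
# negatives, so only positive pairs need to be labeled.
loss = losses.MultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=10)
model.save("finetuned-embedder")
```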
Thanks, @oguiza. I tried `(803, 1, 60)`, i.e. `(n_samples, 1, n_outputs)`, but still got the same error.
Thanks, @oguiza. It's still a bit confusing for me. I'm following the discussion at PatchTST https://github.com/yuqinie98/PatchTST/issues/26, and the author said it is possible to do `multivariate predict univariate`...
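To make the shape question concrete, here is a small numpy sketch of the `(samples, variables, steps)` layout being discussed; `n_samples=803` and `n_outputs=60` come from the comment above, while `n_vars` and `seq_len` are made-up placeholders:

```
# Hedged sketch: multivariate input predicting a univariate horizon.
import numpy as np

n_samples, n_vars, seq_len, horizon = 803, 7, 104, 60  # n_vars/seq_len assumed

X = np.random.randn(n_samples, n_vars, seq_len)  # multivariate history
y = np.random.randn(n_samples, 1, horizon)       # single target series

print(X.shape)  # (803, 7, 104)
print(y.shape)  # (803, 1, 60) -> (n_samples, 1, n_outputs)
```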
I'm also looking for the best way of creating a dataset. I suppose we have to manually create a small dataset (instructions/outputs) and then use Self-Instruct to expand...
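For reference, a minimal sketch of what a hand-written seed record for Self-Instruct-style expansion could look like; the `instruction`/`input`/`output` field names follow the common Alpaca-style convention and are an assumption, not a fixed schema:

```
# Hedged sketch: a few hand-authored seed tasks written to JSONL so a
# Self-Instruct-style pipeline can expand them into a larger dataset.
import json

seed_tasks = [
    {"instruction": "Summarize the following paragraph in one sentence.",
     "input": "The quarterly report shows revenue grew 12% year over year...",
     "output": "Revenue grew 12% year over year in the quarter."},
    {"instruction": "Classify the sentiment of the review as positive or negative.",
     "input": "The battery died after two days.",
     "output": "negative"},
]

with open("seed_tasks.jsonl", "w", encoding="utf-8") as f:
    for task in seed_tasks:
        f.write(json.dumps(task, ensure_ascii=False) + "\n")
```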