Alexander Grimm
You could write save/load methods for the network parameters in the Base_Agent. Since actor-critic/Q/gradient methods have different (numbers of) networks, you could filter Base_Agent's __dict__ for nn.Module instances...
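A minimal sketch of what that could look like, assuming the agent's networks are stored as `torch.nn.Module` attributes on the instance (only `Base_Agent` and the `nn.Module` filtering come from the comment above; the method names, file layout, and directory handling here are illustrative):

```python
import os
import torch
import torch.nn as nn

class Base_Agent:
    # ... existing agent attributes, e.g. self.actor = nn.Module(...), self.critic = nn.Module(...)

    def save_networks(self, directory: str) -> None:
        """Save the state_dict of every nn.Module attribute found on this agent."""
        os.makedirs(directory, exist_ok=True)
        for name, attr in self.__dict__.items():
            if isinstance(attr, nn.Module):
                torch.save(attr.state_dict(), os.path.join(directory, f"{name}.pt"))

    def load_networks(self, directory: str) -> None:
        """Load a state_dict for every nn.Module attribute that has a matching file."""
        for name, attr in self.__dict__.items():
            if isinstance(attr, nn.Module):
                path = os.path.join(directory, f"{name}.pt")
                if os.path.isfile(path):
                    attr.load_state_dict(torch.load(path))
```

Because the loop iterates over the instance's `__dict__`, subclasses with any number of networks (one Q-network, actor + critic, etc.) are handled without extra code.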
```python
def _import_run(self, dst_exp_name, input_dir, dst_notebook_dir):
    exp_id = mlflow_utils.set_experiment(self.mlflow_client, self.dbx_client, dst_exp_name)
    exp = self.mlflow_client.get_experiment(exp_id)
    src_run_path = os.path.join(input_dir, "run.json")
    src_run_dct = io_utils.read_file_mlflow(src_run_path)
    run = self.mlflow_client.create_run(exp.experiment_id)
    run_id = run.info.run_id
    try:
        self._import_run_data(src_run_dct, run_id, src_run_dct["info"]["user_id"])
        ...
```
Unfortunately not... However, I can skip over it with the raise_exception=False flag and can generate at least some data.
Are your embeddings normalized? That might be one reason.
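If they are not, a quick way to L2-normalize them before comparing scores (a generic sketch, not tied to any particular code in this project) is:

```python
import numpy as np

def l2_normalize(embeddings: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """L2-normalize each row so that cosine similarity reduces to a dot product."""
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    return embeddings / np.maximum(norms, eps)

# Example: every row has unit norm afterwards
vecs = np.random.rand(4, 384)
unit_vecs = l2_normalize(vecs)
print(np.linalg.norm(unit_vecs, axis=1))  # ~[1. 1. 1. 1.]
```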
Test Checklist:
- [x] web-search
- [x] 1 knowledge base
- [x] >1 knowledge bases
- [x] file upload
- [x] hybrid search
- [x] full-context mode
- [x] file-manual-full-context
...
Conflict incoming with #12890
@tjbck it would be great to see this merged or to get feedback :) Since mahenning closed his PR, there is no issue anymore.
Rebased to latest dev. Now that the parallelisation of query_doc for non-hybrid search (#13165) has been merged to dev, merging this PR will add an additional speedup for knowledge search. Of course...
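For context, the kind of parallelisation referred to above looks roughly like this (a sketch only: `query_doc` is the function name mentioned in #13165, while `query_collections`, its parameters, and the thread-pool wiring are assumptions for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

def query_collections(collection_names, query_embedding, k, query_doc):
    """Query several knowledge-base collections concurrently instead of one after another.

    query_doc is assumed to be a callable like
    query_doc(collection_name, query_embedding, k) -> list of results.
    """
    workers = max(1, min(8, len(collection_names)))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = {
            name: pool.submit(query_doc, name, query_embedding, k)
            for name in collection_names
        }
        # Collect per-collection results; a failure in one collection does not block the rest.
        results = {}
        for name, fut in futures.items():
            try:
                results[name] = fut.result()
            except Exception:
                results[name] = []
        return results
```

With several knowledge bases attached, the total latency is then bounded by the slowest single collection query rather than the sum of all of them.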
@tjbck rebased to latest dev - still running, and the tests also pass now that the issues were fixed on dev.
I will take a look at it later today