laborer-lab

Results: 10 comments by laborer-lab

Hi, is there a pretrained TensorFlow/Keras version available?

cv2.addWeighted can't solve this problem, because it is mapped from the heatmap.

> I'm not sure what you mean. Do you mean that the activation map overlay should be thresholded?

I think the blue area could restore the original image pixel values, only...

Mine is the same; this appears every epoch.

Are you running the code in Docker? You may need to increase the shared memory (shm) size in the Docker settings.

> > Check out the patmatch shape template matching I developed: https://www.bilibili.com/video/BV1Ah4y1h7vs/?spm_id_from=333.999.0.0&vd_source=0bfccc45d8303cee561cd4a90452a69a > > Demo link: https://pan.baidu.com/s/1qvlGxvTQDsTWujCmL1aMPw?pwd=16Ai > > Nice work, bro. Where is it now? The links won't open on my end and I can't find it anywhere.

I had the same problem: mAP@.5:.95 and mAP@.5 vary greatly, and training is also too slow, but I used yolox_s.

> Thank you for your excellent work. > > I'm having problems configuring the runtime environment for the code in the repository according to the [Requirements](https://github.com/Hanqer/deep-hough-transform#requirements). During the installation of...

Thanks for the reply. I've now hit a new problem: evaluate.py runs fine, but train.py fails when low_level_model is trained as well. 1. Error when training low_level_model at the same time: 1) https://github.com/Learning4Optimization-HUST/H-TSP/blob/ee91c49cc7bfd9fb76ae7e7f5a0631877c262675/h_tsp.py#L1299 — this line was changed in newer pytorch_lightning, starting from version 1.5.6.

Originally:

```
def attach_model_logging_functions(self, model):
    for callback in self.trainer.callbacks:
        callback.log = model.log
        callback.log_dict = model.log_dict
```

Now:

```
def _attach_model_logging_functions(self) -> None:
    lightning_module =...
```
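Since the method was renamed (and privatized) between pytorch_lightning versions, one version-agnostic workaround is to dispatch on whichever attribute the installed connector actually exposes. This is only a sketch of that hasattr-dispatch pattern, using stand-in connector classes; the real Lightning internals are private and may differ further between releases:

```python
class OldConnector:
    """Stand-in for pytorch_lightning < 1.5.6 (public method, takes a model)."""
    def attach_model_logging_functions(self, model):
        return "old"

class NewConnector:
    """Stand-in for newer pytorch_lightning (privatized, no model argument)."""
    def _attach_model_logging_functions(self):
        return "new"

def attach_logging(connector, model=None):
    # Prefer the newer private name; fall back to the old public one.
    if hasattr(connector, "_attach_model_logging_functions"):
        return connector._attach_model_logging_functions()
    return connector.attach_model_logging_functions(model)

print(attach_logging(NewConnector()))  # -> new
print(attach_logging(OldConnector()))  # -> old
```

Relying on private `_`-prefixed attributes is fragile; pinning the pytorch_lightning version the repository was developed against is the safer fix when possible.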

I'm not familiar with this usage. I tried adding it in both the HTSP_PPO class and the PathSolver class:

```
@pl.LightningModule.trainer.setter
def trainer(self, trainer: Optional["pl.Trainer"]) -> None:
    for v in self.children():
        if isinstance(v, pl.LightningModule):
            v.trainer = trainer  # type: ignore[assignment]
    self._trainer = trainer
```

I also tried rewriting https://github.com/Learning4Optimization-HUST/H-TSP/blob/ee91c49cc7bfd9fb76ae7e7f5a0631877c262675/h_tsp.py#L1299 as `self.low_level_model.trainer._callback_connector._attach_model_logging_functions()`. Neither works. Could you give some guidance...