Tianlun Zheng
I get the same issue in train.py. Can anyone help me?
I ran train.py but got an error; the checkpoint_mlt/ directory was downloaded as described in the README. Can anyone help? The error is below: Traceback (most recent call last):...
Just extract it into data/dataset/.
Hello, I ran into this problem too. My TF version is 1.8. Could you send me your rewritten demo.py? I've been at it for three days and still can't get it working. Many thanks.
> @simplify23 First download this model:
> Link: https://pan.baidu.com/s/1gm0Uq_sLe00En-IbUPiQUg
> Extraction code: qcco
>
> Then you can refer to this code for usage:
> https://github.com/bing1zhi2/chinese_ocr/blob/master/chinese_ocr/predict_tf_tool.py
> I'm still updating that project; I just pushed a new version.

Thank you for your project. I see it draws on two Chinese OCR projects. May I ask what you work on specifically, and what you think the respective strengths and weaknesses of yolo3+crnn and ctpn+densenet+ctc are? Many thanks.
> The quantized model can only be deployed for inference with PaddleInference or PaddleLite:
> PaddleInference: https://paddleinference.paddlepaddle.org.cn/product_introduction/summary.html
> PaddleLite: https://paddlelite.paddlepaddle.org.cn/introduction/tech_highlights.html

Hello, I'd like to ask about this: I use PPSlim to compress the model, but since PPSlim quantization does not directly reduce the model size, a Lite conversion is needed. The Lite-converted model comes out as .nb or model+params. However, due to certain requirements, I need the format to remain the inference .pdmodel and .pdiparams. Is there any way to convert it?
> You can integrate the Lite conversion logic into your prediction code: each time the service starts, read the .pdmodel and .pdiparams and do the conversion once before loading the model.

Could you share a link to the conversion code? The Lite docs don't have a dedicated section on this.
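A minimal sketch of the suggested startup-time conversion, assuming Paddle-Lite's Python `Opt` API (`pip install paddlelite`); the function name `convert_with_lite` and the `"arm"` target are illustrative choices, and method availability may vary across Lite versions:

```python
def convert_with_lite(model_file, param_file, out_prefix):
    """Convert an inference model (.pdmodel/.pdiparams) to a Paddle-Lite
    .nb model once at service startup, before loading it for prediction."""
    # Imported lazily so the rest of the service can start without Paddle-Lite.
    from paddlelite.lite import Opt

    opt = Opt()
    opt.set_model_file(model_file)    # path to the .pdmodel file
    opt.set_param_file(param_file)    # path to the .pdiparams file
    opt.set_optimize_out(out_prefix)  # writes <out_prefix>.nb
    opt.set_valid_places("arm")       # target hardware; adjust for your deployment
    opt.run()

# Example (hypothetical paths), called once at service startup:
# convert_with_lite("model.pdmodel", "model.pdiparams", "model_opt")
```

This keeps only the .pdmodel/.pdiparams pair on disk and treats the .nb file as a throwaway artifact regenerated on each launch.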
We are still running preliminary experiments. We want V2 to be faster and lighter while accommodating more application scenarios. Individual modules require further analysis and testing....
We didn't try that. You can refer to MMOCR or PaddleOCR for this.
Perhaps issue #1 can help you. If that still doesn't resolve it, feel free to contact me.