zaobao
I don't see any parameters in the usage instructions for setting these two thresholds. Are they fixed?

- corpus: required; a file from open(), a database connection, or a list. Examples: `corpus = open(file_name, 'r', encoding='utf-8')`, `corpus = conn.execute(query)`, `corpus = list(***)`
- top_k: float or int; the proportion or number of phrases to extract
- chunk_size: int; the chunk size used when reading the file in blocks
- min_n: int; extract n-grams of at least this length
- max_n: int; extract n-grams of at most this length
- min_freq: int; the minimum frequency for extraction candidates

Is top_k ranked by term frequency, or by the richness of left/right neighboring characters (entropy) or internal cohesion?
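To make the question concrete, here is a minimal sketch of what frequency-based ranking would look like. The function name and signature are hypothetical (not the library's actual API); it only illustrates the `min_n`/`max_n`/`min_freq`/`top_k` parameters, ranking purely by frequency rather than by neighbor entropy or cohesion.

```python
from collections import Counter

def extract_ngrams(tokens, min_n=2, max_n=4, min_freq=2, top_k=10):
    """Hypothetical sketch: rank candidate n-grams purely by frequency.

    Real phrase extractors often also score left/right neighbor entropy
    and internal cohesion (e.g. PMI); this shows only the frequency cut.
    """
    counts = Counter()
    for n in range(min_n, max_n + 1):
        for i in range(len(tokens) - n + 1):
            counts[tuple(tokens[i:i + n])] += 1
    # Apply the minimum-frequency threshold, then sort by count.
    candidates = [(ng, c) for ng, c in counts.items() if c >= min_freq]
    candidates.sort(key=lambda kv: kv[1], reverse=True)
    # top_k as a float is a proportion; as an int it is a count.
    k = int(top_k * len(candidates)) if isinstance(top_k, float) else top_k
    return candidates[:k]

result = extract_ngrams(list("abcabcabd"), min_n=2, max_n=3, min_freq=2, top_k=3)
print(result)
```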
## Description

Dynamic quantization (weights quantized, with activations read/stored in floating point and quantized for compute).

Will this change the current api?
- Model.quantize

Who will benefit from this enhancement?...
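For reference, this is what dynamic quantization looks like on the PyTorch side today. This is a sketch of `torch.quantization.quantize_dynamic`, not the proposed DJL `Model.quantize` API; the model here is a throwaway example.

```python
import torch
import torch.nn as nn

# A toy float model; only the Linear layers will be dynamically quantized.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

# Weights are stored as int8; activations stay in float and are
# quantized on the fly for the matmul.
qmodel = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(2, 16)
out = qmodel(x)
print(out.shape)  # the quantized model keeps the float model's interface
```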
I found that inference in TorchScript (converted by [model_zoo_importer.py](https://github.com/deepjavalibrary/djl/blob/5c691c0979d92702f7bfe1aba2f989d8c9b1c568/extensions/tokenizers/src/main/python/model_zoo_importer.py#L23)) is slower than in the default PyTorch format.
```
from sentence_transformers import CrossEncoder
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
import time
...
```
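When comparing the two formats, it helps to exclude one-time costs (TorchScript JIT specialization on the first calls, lazy initialization) from the measurement. A minimal stdlib-only timing helper along these lines — the helper and the commented usage are illustrative, not DJL code:

```python
import time

def benchmark(fn, *args, warmup=3, iters=20):
    """Average per-call latency of an inference callable.

    Warmup runs are excluded so one-time costs (JIT compilation,
    lazy initialization) do not skew the result.
    """
    for _ in range(warmup):
        fn(*args)
    start = time.perf_counter()
    for _ in range(iters):
        fn(*args)
    return (time.perf_counter() - start) / iters

# Usage sketch against the models from the snippet above:
#   avg_eager  = benchmark(lambda: model(**inputs))
#   avg_traced = benchmark(lambda: traced_model(**inputs))
avg = benchmark(lambda: sum(range(1000)))
print(avg >= 0.0)  # → True
```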
DJL version 0.27.0

When a null-value sequence is passed to a text-encoder model, the Java process crashes. Error output:
```
thread '' panicked at src/lib.rs:217:14:
Couldn't get java string!: NullPtr("get_string...
```
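Until the native layer rejects nulls gracefully, the caller can guard the batch before it reaches the tokenizer. A hypothetical sketch of such a guard (the DJL caller would do the equivalent check in Java; raising instead of substituting may be preferable, depending on the application):

```python
def sanitize_batch(texts, replacement=""):
    """Replace None entries before they reach the native tokenizer,
    which panics on a null string (hypothetical guard, not DJL code)."""
    cleaned = []
    for t in texts:
        cleaned.append(replacement if t is None else t)
    return cleaned

print(sanitize_batch(["hello", None, "world"]))  # → ['hello', '', 'world']
```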
It looks like FLOAT16 is not supported yet:
```
Caused by: java.lang.UnsupportedOperationException: type is not supported: FLOAT16
    at ai.djl.onnxruntime.engine.OrtUtils.toDataType(OrtUtils.java:101)
    at ai.djl.onnxruntime.engine.OrtNDArray.getDataType(OrtNDArray.java:65)
    at ai.djl.onnxruntime.engine.OrtNDArray.toByteBuffer(OrtNDArray.java:121)
    at ai.djl.pytorch.engine.PtNDManager.from(PtNDManager.java:55)
    at ai.djl.pytorch.engine.PtNDManager.from(PtNDManager.java:31)
    at...
```
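One workaround on the data side is to widen FLOAT16 buffers to FLOAT32 before handing them to an engine that lacks FP16 support. A stdlib-only sketch using the `struct` half-precision format character `e` (assumes little-endian data; the function name is illustrative):

```python
import struct

def f16_bytes_to_f32(buf):
    """Widen a little-endian FLOAT16 buffer to FLOAT32 bytes.

    Sketch of a CPU-side workaround for engines that reject FP16
    tensors; each 2-byte half becomes a 4-byte float.
    """
    n = len(buf) // 2
    halves = struct.unpack("<%de" % n, buf)
    return struct.pack("<%df" % n, *halves)

half = struct.pack("<2e", 1.5, -0.25)           # exactly representable in fp16
full = struct.unpack("<2f", f16_bytes_to_f32(half))
print(full)  # → (1.5, -0.25)
```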
### Environment Info

- Container: Docker with NO GPU
- OS: AlmaLinux
- CUDA installed: 12.2
- Cudnn installed: 8.9.0
- djl version: 0.29.0
- onnxruntime_gpu version: 1.8.0

### Error Message
```
[root@r100048367-91051506-l5wvj powerop]# cat /tmp/hs_err_pid1062.log...
```