Honglei Zhuang
Hi @WesleyHung, Is your customized BERT checkpoint a TF2 checkpoint? Did you also provide the JSON config file (similar to the toy example [here](https://github.com/tensorflow/ranking/blob/master/tensorflow_ranking/extension/testdata/bert_lite_config.json)) as the `bert_config_file` argument? As for more...
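As a minimal sanity check (the path below is a placeholder, not from the example script), you can verify the config file is well-formed JSON before passing it as `bert_config_file`:

```python
import json

# Placeholder path: point this at your BERT config, analogous to the
# bert_lite_config.json toy example linked above.
with open("/path/to/bert_config.json") as f:
    bert_config = json.load(f)

# A valid BERT config should contain at least these architecture fields.
print(bert_config["hidden_size"], bert_config["num_hidden_layers"])
```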
Hi @WesleyHung, Can you [inspect](https://www.tensorflow.org/guide/checkpoint#manually_inspecting_checkpoints) the checkpoint you generated and the checkpoint downloaded from the [TF Model Garden](https://github.com/tensorflow/models/tree/master/official/nlp/bert), and check whether they have the same variable names? If not, you may...
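A small sketch of that comparison using `tf.train.list_variables` (both checkpoint paths below are placeholders):

```python
import tensorflow as tf

# tf.train.list_variables returns (name, shape) pairs for a checkpoint.
custom_vars = dict(tf.train.list_variables("/path/to/your_checkpoint"))
garden_vars = dict(tf.train.list_variables("/path/to/model_garden_checkpoint"))

# Variable names present in one checkpoint but not the other usually
# explain why restoring the pretrained weights fails.
print("Only in yours:", sorted(set(custom_vars) - set(garden_vars)))
print("Only in garden:", sorted(set(garden_vars) - set(custom_vars)))
```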
Hi @PeterAJansen and @azagniotov, We are working on a Keras update of TFR-BERT, and hopefully that will resolve this issue. Thank you!
Hi @MiladAlshomary. You can use the sigmoid cross-entropy loss directly without modifying `listwise_inference`. As the [comment](https://github.com/tensorflow/ranking/blob/master/tensorflow_ranking/extension/examples/tfrbert_example.py#L246) suggests, only listwise inference is currently supported for the Keras ranking network.
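A rough sketch of swapping in that loss at compile time; `ranker` here is a hypothetical stand-in for the Keras ranking model built in tfrbert_example.py, not a name from the script:

```python
import tensorflow as tf
import tensorflow_ranking as tfr

# `ranker` is a hypothetical stand-in for the Keras ranking model;
# only the loss argument changes, inference code is untouched.
ranker.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
    loss=tfr.keras.losses.SigmoidCrossEntropyLoss(),
    metrics=tfr.keras.metrics.default_keras_metrics(),
)
```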
Hi @MiladAlshomary. Using a cross-entropy loss essentially means the model is trained with a pointwise loss, i.e., during training, the loss of an example is only related to the example...
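To illustrate the pointwise behavior with plain TensorFlow (the labels and scores below are made-up numbers):

```python
import tensorflow as tf

# One query with three documents; labels and scores are toy values.
labels = tf.constant([[1.0, 0.0, 0.0]])
scores = tf.constant([[2.0, -1.0, 0.5]])

# Each entry of the result depends only on its own (label, score) pair:
# changing one document's score leaves the other entries untouched.
per_doc_loss = tf.nn.sigmoid_cross_entropy_with_logits(
    labels=labels, logits=scores)
print(per_doc_loss)  # shape [1, 3], one loss value per document
```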
Hi @MiladAlshomary, Yes. Using one of the `pairwise_*` loss functions would allow the model to be trained in a pairwise setting. It means the model would look at other examples in...
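For contrast with the pointwise sketch above, here is a toy example using `tfr.keras.losses.PairwiseLogisticLoss`, one of the pairwise losses in the Keras API (the numbers are again made up):

```python
import tensorflow as tf
import tensorflow_ranking as tfr

# Two documents where the lower-scored one is actually relevant.
labels = tf.constant([[1.0, 0.0]])
scores = tf.constant([[0.3, 0.8]])

# The pairwise logistic loss penalizes the mis-ordered pair through the
# score difference s_i - s_j, so documents are no longer independent.
loss = tfr.keras.losses.PairwiseLogisticLoss()
print(loss(labels, scores).numpy())
```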
Hi @MiladAlshomary, Sorry for the late reply. Correct: each document gets its score independently of the others during inference. It is definitely a very good question whether...