Deng Cai

17 comments by Deng Cai

Hi @nashid. I think it is doable. We already provide examples of how to process [dependency parses](https://github.com/jcyk/gtos/blob/master/translator/data.py) and [AMR graphs](https://github.com/jcyk/gtos/blob/master/generator/data.py). You may start from either of them and make...
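
For anyone adapting those readers to a new graph format, here is a minimal sketch of turning a dependency parse into a generic node/edge record. It is only an illustration of the kind of structure a graph-to-sequence reader consumes; the field names `nodes` and `edges` are hypothetical and not the exact format used in gtos.

```python
# Convert a dependency parse into a generic graph record.
# The (head, dependent, label) edge layout is illustrative only.

def parse_to_graph(tokens, heads, labels):
    """tokens: list of words; heads: 1-based head index per token (0 = root);
    labels: dependency relation per token."""
    nodes = list(tokens)
    edges = []
    for i, (head, label) in enumerate(zip(heads, labels)):
        if head == 0:
            continue  # skip the artificial root
        edges.append((head - 1, i, label))  # (source node, target node, relation)
    return {"nodes": nodes, "edges": edges}


if __name__ == "__main__":
    example = parse_to_graph(
        tokens=["She", "reads", "books"],
        heads=[2, 0, 2],
        labels=["nsubj", "root", "obj"],
    )
    print(example)
```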

@sevenights Hi, I regard Theano as a legacy framework in deep learning and am not familiar with it anymore. I would refer you to a better implementation [here](https://github.com/jcyk/greedyCWS). It is with...

@fliegenpilz357 Yes. I think you are right. The problem may be with the empty abstract_map(s). FYI, the preprocessing scripts for named entities are borrowed from https://github.com/sheng-z/stog. I will try to...
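
To check that hypothesis, one quick sanity check is to scan the preprocessed data and flag examples whose abstract map is empty. A minimal sketch, assuming the output is JSON lines with an `abstract_map` field (the file name and field layout are assumptions, not necessarily the exact stog output):

```python
import json

# Count examples whose abstract_map is empty in a (hypothetical) JSON-lines
# preprocessing output. Adjust the path and field name to the actual files.
def count_empty_abstract_maps(path):
    empty, total = 0, 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            if not line.strip():
                continue
            example = json.loads(line)
            total += 1
            if not example.get("abstract_map"):
                empty += 1
    return empty, total


if __name__ == "__main__":
    empty, total = count_empty_abstract_maps("dev.preproc.jsonl")
    print(f"{empty}/{total} examples have an empty abstract_map")
```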

@mkartik Thanks for pointing it out! However, does regenerating require gold AMRs? If so, this cannot solve our problem here. Do you think it is possible to relax the...

@fliegenpilz357 Hi, I think this example is too complicated. Do you mean changing "Guofang Shen" to "Hua Chunying"? And the results show that "Guofang Shen" can be recognized but "Hua...

@fliegenpilz357 That is very interesting! "Guofang Shen" is in the abstract_map, but "Hua Chunying" is not. As pointed out by @mkartik, an entity can only be recognized if a...

@fliegenpilz357 As suggested by @mkartik, one possible solution is to regenerate the 'text_anonymization_rules.json'. Another possible solution is to modify the preprocessing rules (I guess we had better remove this inconvenient...
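
As an illustration of the first option, the sketch below adds a missing entity to the anonymization rules by hand. The schema (a flat mapping from surface form to an entity placeholder) is an assumption on my side; please check the actual structure of 'text_anonymization_rules.json' before using it.

```python
import json

RULES_PATH = "text_anonymization_rules.json"

# Load the existing rules, insert a mapping for the missing entity, and write
# the file back. The flat {surface form: placeholder} layout is assumed here.
with open(RULES_PATH, encoding="utf-8") as f:
    rules = json.load(f)

rules.setdefault("Hua Chunying", "PERSON")

with open(RULES_PATH, "w", encoding="utf-8") as f:
    json.dump(rules, f, ensure_ascii=False, indent=2)
```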

I think I found the reason: the loop over eval_dataloader makes `self.gradient_state.end_of_dataloader` `True`. Then `self.gradient_state.sync_gradients` is always set to `True` (see https://github.com/huggingface/accelerate/blob/978dfc38ea91aae645d780d393d5bc0e33ac8306/src/accelerate/accelerator.py#L627). Do you feel this is...
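
To make that failure mode concrete, here is a minimal, self-contained sketch of the pattern in question. The toy model and data are placeholders; the point is only the evaluation loop nested inside the training loop.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

# Illustrative reproduction of the behaviour described above: running an
# evaluation loop in the middle of training leaves the gradient state thinking
# it is at the end of a dataloader.
accelerator = Accelerator(gradient_accumulation_steps=4)

model = torch.nn.Linear(8, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
train_data = TensorDataset(torch.randn(64, 8), torch.randn(64, 1))
eval_data = TensorDataset(torch.randn(16, 8), torch.randn(16, 1))
train_dataloader = DataLoader(train_data, batch_size=4)
eval_dataloader = DataLoader(eval_data, batch_size=4)

model, optimizer, train_dataloader, eval_dataloader = accelerator.prepare(
    model, optimizer, train_dataloader, eval_dataloader
)

for step, (inputs, targets) in enumerate(train_dataloader):
    with accelerator.accumulate(model):
        loss = torch.nn.functional.mse_loss(model(inputs), targets)
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()
        # With accumulation working correctly this is True only every 4 steps;
        # after the eval loop below it reportedly stays True on every step.
        print(step, accelerator.sync_gradients)

    if step == 3:  # a mid-epoch evaluation pass
        model.eval()
        with torch.no_grad():
            for eval_inputs, eval_targets in eval_dataloader:
                _ = model(eval_inputs)
        model.train()
```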

I would like to share my current solution to this problem.

```python
step = 0
while True:
    for batch in training_dataloader:
        step += 1
        with accelerator.accumulate(model):
            inputs, targets = batch...
```
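
The snippet above is cut off; a fuller sketch of the same idea reuses `training_dataloader`, `accelerator`, and `model` from the snippet, while `optimizer`, `loss_fn`, `max_steps`, and the stopping condition are my additions rather than the original solution.

```python
# Sketch of the workaround: keep a global step counter and loop over the
# training dataloader indefinitely, so training is driven by step count
# rather than by epochs. Assumes model, optimizer, loss_fn, training_dataloader,
# and accelerator have been prepared as in the earlier sketch.
max_steps = 10_000
step = 0
done = False
while not done:
    for batch in training_dataloader:
        step += 1
        with accelerator.accumulate(model):
            inputs, targets = batch
            outputs = model(inputs)
            loss = loss_fn(outputs, targets)
            accelerator.backward(loss)
            optimizer.step()
            optimizer.zero_grad()
        if step >= max_steps:
            done = True
            break
```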