nareshmungpara
When we run segmentation and then pass the segmented sentences to auto punctuation, the input text length given to segmentation is not the same as the combined output length...
```python
import os

from deepsegment import DeepSegment

def correct_sentence__test():
    segmenter = DeepSegment('en')
    chunk_size = 200  # process 200 characters at a time
    for txt_file in os.listdir('input'):
        output_data = ''
        with open('input/{}'.format(txt_file), 'r') as f:
            data = f.read()
        seg_data_arr = segmenter.segment_long(data)
        ...
```
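To pin down where characters go missing, one way is to compare character counts while ignoring whitespace, since the segmenter re-splits text on spaces. This is a minimal sketch; `chars_ignoring_ws` is a hypothetical helper, not part of DeepSegment:

```python
def chars_ignoring_ws(text):
    # Character count with all whitespace removed, so the re-joined
    # segments can be compared fairly against the original input.
    return len(''.join(text.split()))

# Hypothetical check (assumes `segmenter` and `data` from the snippet above):
# segments = segmenter.segment_long(data)
# assert chars_ignoring_ws(data) == chars_ignoring_ws(' '.join(segments))
```

If the assertion fails, the segmenter itself is dropping or altering characters; if it passes, the mismatch is introduced later, in the auto-punctuation step.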
Debugging the code today, I found that segmentation itself is working fine, but the output is still not the same as shown in the demo.

Input text:

> hello...
> Hi, the input shape is 196 * 720 * 1280. It looks like you are feeding 196 images at once. It's obvious that you don't have enough memory to...
Update:

- I tried to freeze the model and then run the code with the `.pb` file, and now I am getting the following error: `Resource exhausted: OOM when allocating tensor...`
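Since the error persists with the frozen graph, it may help to stop TensorFlow from pre-allocating the whole GPU. A config sketch, assuming TensorFlow 1.x (which frozen `.pb` graphs usually imply); the session line is commented because the graph-loading code is not shown here:

```python
import tensorflow as tf

# Assumption: TF 1.x session API. `allow_growth` makes the session
# allocate GPU memory incrementally instead of grabbing it all up front.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
# sess = tf.Session(graph=loaded_graph, config=config)
```

This does not reduce the memory a 196-image batch actually needs, so it should be combined with batching the input as suggested above.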