Muzamil Hussain Syed
> Hi,
>
> I have another question regarding the dataset. Previously, I added personas for response generation. Apart from persona, how can we add the dialog history in the...
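A hedged sketch of how multi-turn dialog history could be encoded for the `jsonfile` teacher referenced later in this thread: my understanding is that each line of the data file is one episode whose `dialog` key holds consecutive turn pairs, so earlier pairs become context for later ones. The field names (`dialog`, `id`, `text`) and the exact structure are assumptions worth checking against the ParlAI data-format docs.

```python
# A hedged sketch: one multi-turn episode in what I believe is the
# jsonfile/Conversations format (one JSON object per line, "dialog" holding
# consecutive turn pairs). Earlier pairs then serve as dialog history for the
# later ones. Field names are assumptions; verify against the ParlAI docs.
import json

episode = {
    "dialog": [
        [
            {"id": "human", "text": "your persona: i love gardening.\nhi, any weekend plans?"},
            {"id": "bot", "text": "i'll probably spend it in my garden, i love gardening!"},
        ],
        [
            {"id": "human", "text": "what are you planting this year?"},
            {"id": "bot", "text": "mostly tomatoes and a few herbs."},
        ],
    ]
}

with open("dialogue_dialog_json_task.json", "w") as f:
    f.write(json.dumps(episode) + "\n")  # one episode per line
```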
Hi @klshuster Thank you so much for the response. I have prepended the `topic` tag in the text, similar to the `your persona` tag, just before the input query. Do you...
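For concreteness, a minimal sketch of the prepending described above; `prepend_topic` is a hypothetical helper, not a ParlAI utility, and the `topic:` prefix simply mirrors the `your persona:` convention.

```python
# A minimal sketch (hypothetical helper, not a ParlAI utility) of prepending a
# topic line ahead of the query, mirroring the "your persona:" convention.
def prepend_topic(topic: str, query: str) -> str:
    return f"topic: {topic}\n{query}"

print(prepend_topic("space travel", "have you ever wanted to visit mars?"))
```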
> if you have suitable training data, I would guess it'll learn something. Only one way to find out!

I tried this approach. The model seems to be learning from the context...
Thanks for the response! Which model would be suitable for this? I am currently using BB2_400M.
Thanks! When I train the BB2 model on my custom dataset, it trains well on the custom dialogues. However, training on the custom dataset affects the BB2 model's general responses. For...
Would it be helpful to add the `blended_skill_task` task alongside the new task (`--task jsonfile --jsonfile-datapath dialogue_dialog_json_task.json`) when training the model, so that it preserves its existing knowledge?
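A hedged sketch of what that multi-task setup might look like via ParlAI's Python training entry point, assuming the intended second task is ParlAI's `blended_skill_talk` and that the zoo path, agent class, and sampling weights below are right for BB2 400M; the remaining BB2-specific training arguments are omitted.

```python
# A hedged sketch of multi-task training so the custom data is mixed with
# blended_skill_talk (the ParlAI task I assume is meant above). The zoo path,
# agent class, and weights are assumptions; the rest of the usual BB2
# training arguments are omitted for brevity.
from parlai.scripts.train_model import TrainModel

TrainModel.main(
    task='jsonfile,blended_skill_talk',            # mix the custom task with BST
    jsonfile_datapath='dialogue_dialog_json_task.json',
    multitask_weights='3,1',                       # hypothetical sampling ratio
    model='projects.blenderbot2.agents.blenderbot2:BlenderBot2FidAgent',  # assumed BB2 agent class
    init_model='zoo:blenderbot2/blenderbot2_400M/model',                  # assumed zoo path
    model_file='/tmp/bb2_custom/model',
    # ...plus whatever batch size, lr, and BB2-specific flags you already use
)
```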
Hi @klshuster , Thanks for always helping with your feedback. I have another question related to `search_server: --search_server 0.0.0.0:8080`. Is it possible to search from _documents_ instead of a _search...
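One possible way to do this is to keep the `--search-server` interface but point it at a local document collection. The sketch below assumes the server protocol used by ParlAI's search retriever (a POST carrying a query `q` and result count `n`, answered with a JSON `response` list of `url`/`title`/`content` dicts); that protocol and the field names should be verified against `parlai/agents/rag/retrieve_api.py`.

```python
# A hedged sketch of serving a local document collection behind the HTTP
# interface that --search-server expects, instead of a live web search. My
# understanding (verify against parlai/agents/rag/retrieve_api.py) is that the
# retriever POSTs a query `q` and a result count `n` and expects JSON shaped
# like {"response": [{"url": ..., "title": ..., "content": ...}, ...]}.
from flask import Flask, request, jsonify

app = Flask(__name__)

# Hypothetical in-memory "document store"; load your own files or an index here.
DOCUMENTS = [
    {"url": "local://doc1", "title": "Gardening basics", "content": "Tomatoes need full sun and regular watering..."},
    {"url": "local://doc2", "title": "Hiking checklist", "content": "Bring water, a map, sunscreen, and snacks..."},
]

def score(doc, query):
    # Naive keyword overlap; swap in BM25 / FAISS for anything real.
    q_terms = set(query.lower().split())
    d_terms = set((doc["title"] + " " + doc["content"]).lower().split())
    return len(q_terms & d_terms)

@app.route("/", methods=["POST"])
def search():
    query = request.form.get("q", "")
    n = int(request.form.get("n", 5))
    ranked = sorted(DOCUMENTS, key=lambda d: score(d, query), reverse=True)
    return jsonify({"response": ranked[:n]})

if __name__ == "__main__":
    # Then point BB2 at it with --search-server 0.0.0.0:8080
    app.run(host="0.0.0.0", port=8080)
```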
Hi @klshuster Are there any length constraints on the input when adding personas and context as given below? `input = "your persona:{persona}\n..\n{user_input} {bot_output}\n...\n{actual input text}"`
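As a point of reference, a minimal sketch (a hypothetical helper, not ParlAI's internal flattening code) of assembling that context string; the practical limit is the agent's text-truncation setting, since my understanding is that tokens beyond it are simply truncated away.

```python
# A minimal sketch (hypothetical helper, not ParlAI's internal flattening code)
# of assembling the context string described above; the practical limit is the
# agent's text-truncation setting, since anything beyond it gets truncated.
def build_input(personas, history, current_text):
    """personas: list of persona strings; history: list of (user, bot) pairs."""
    lines = [f"your persona: {p}" for p in personas]
    lines += [f"{user} {bot}" for user, bot in history]
    lines.append(current_text)
    return "\n".join(lines)

print(build_input(
    ["i love gardening."],
    [("hi, any weekend plans?", "i'll probably spend it in my garden.")],
    "what are you planting this year?",
))
```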
How come the 3B model is limited to 128 tokens while the 400M model allows 1024 tokens? I would have expected the opposite.
Hi @klshuster , I understand how the token length is defined for the mentioned models. I am curious: a few days back you told me that there are no length restrictions...
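For experimenting with those limits, a hedged sketch of overriding the truncation flags at load time via ParlAI's Python entry point; `--truncate`, `--text-truncate`, and `--label-truncate` are standard ParlAI agent flags, while the zoo path and the search server value are assumptions carried over from earlier in the thread.

```python
# A hedged sketch of overriding the truncation lengths at load time instead of
# relying on the checkpoint's saved defaults. Whether raising them helps a
# given BB2 checkpoint (whose positional embeddings still bound the usable
# context) is a separate question.
from parlai.scripts.interactive import Interactive

Interactive.main(
    model_file='zoo:blenderbot2/blenderbot2_400M/model',  # assumed zoo path
    search_server='0.0.0.0:8080',                         # the server mentioned above, if running
    text_truncate=1024,
    label_truncate=128,
    truncate=1024,
)
```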