BB2 search server related question
I'm a bit confused about how the search server works. I have a Flask app with BB2 (blenderbot2_400M) and the search server running, using the Bing Search API to fetch results. When I ask the bot a question, it does search the internet, but it doesn't return the answer I expect.
For example, when I ask "Who is Joe Biden?", BB2 always tells me that he is the vice president. Do I need any additional configuration?
I'm using the create_agent_from_model_file method to create the agent from the model file, passing these options:
opt = {
    "interactive_mode": True,
    "doc_chunk_split_mode": "word",
    "debug": True,
    "task": "blended_skill_talk:all",
    "search_server": "http://0.0.0.0:8080",
}
Yeah, one of the drawbacks of the model was that it did not always use the information available to it. This was addressed in the SeeKeR model, which is better at utilizing retrieved information.
Hello @AbreuY, I ran into this problem too. In my case, the search server had gone down whenever it occurred. I don't know why the server dies, but it does. If your server hasn't died and is still operating normally, I think you should try the SeeKeR model mentioned above.
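Since a dead search server is one likely culprit, it may help to sanity-check that the server is reachable before starting the bot. Below is a minimal sketch; the field names `q` and `n` follow the community ParlAI search-server convention and are an assumption here, so adjust them to whatever protocol your server actually speaks.

```python
# Hypothetical health check for a ParlAI-style search server.
# Assumes the server accepts a POST with form fields "q" (query)
# and "n" (number of results) and returns JSON.
import requests


def check_search_server(url="http://0.0.0.0:8080"):
    try:
        resp = requests.post(url, data={"q": "Joe Biden", "n": 5}, timeout=10)
        resp.raise_for_status()
        return resp.json()  # server is up and answering
    except requests.RequestException as e:
        print(f"search server not reachable: {e}")
        return None
```

If this returns `None` while the bot is giving odd answers, the problem is the server rather than the model.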
Hello @daje0601 and @mojtaba-komeili, thank you very much for your answers. Should I use seeker/r2c2_blenderbot_400M or seeker/seeker_dialogue_400M? I am currently using seeker/r2c2_blenderbot_400M, but the result is still the same.
@daje0601 Regarding the search server: it is running in the background and I have had no problems with it. It hasn't stopped working.
OK, I managed to get the bot working using seeker/seeker_dialogue_400M/model -o gen/seeker_dialogue --search-server <search_server>. Thanks.
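For anyone else landing here, the full invocation would look something like the following, assuming ParlAI's CLI and model zoo; replace <search_server> with the host:port of your running search server.

```shell
# Hypothetical full command, based on ParlAI's interactive script
# and the SeeKeR opt preset mentioned above.
parlai interactive \
  --model-file zoo:seeker/seeker_dialogue_400M/model \
  -o gen/seeker_dialogue \
  --search-server <search_server>
```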
But the answers it returns from the search server are sometimes inconsistent. I suppose it will be necessary to train it so that it has more up-to-date information, and so the bot can give us more concise and correct answers? I would like to know your opinion. Thanks in advance.
I assume by "inconsistent" you mean varying over time. In that case, yes, you might have to. But if you want to keep your data consistent across model validation runs, you can use a mock server that stores the outcomes of past queries, essentially freezing your search engine in time. Generally, though, it is an open question how to capture temporal state in this model.
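The "frozen" mock server described above can be sketched in a few lines of Flask. This is an illustration only: the request/response shape (`q` form field, a JSON body of the form `{"response": [{"url", "title", "content"}, ...]}`) follows the community ParlAI search-server convention and is an assumption, as is the cache file name.

```python
# Minimal sketch of a mock search server that only replays
# previously recorded results, freezing the "search engine" in time.
import json
import os

from flask import Flask, jsonify, request

app = Flask(__name__)
CACHE_FILE = "search_cache.json"  # hypothetical store of past query results


def load_cache():
    """Return {query: [result, ...]} recorded from the real search server."""
    if os.path.exists(CACHE_FILE):
        with open(CACHE_FILE) as f:
            return json.load(f)
    return {}


@app.route("/", methods=["POST"])
def search():
    cache = load_cache()
    query = request.form.get("q", "")
    # Unknown queries get an empty result list rather than a live lookup,
    # so model validation sees the same documents on every run.
    return jsonify({"response": cache.get(query, [])})


# Start with, e.g.:  app.run(host="0.0.0.0", port=8080)
```

Pointing `--search-server` at this process instead of the live Bing-backed server keeps retrieval deterministic between evaluation runs.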
I suppose that it will be necessary to train it so that it can have more up-to-date information. And the bot can give us more concise and correct answers?
The goal of using a search engine to provide results to a model is that, in theory, it should not require training on new, up-to-date information in order to discuss it. Certain problems with generations may be caused by other things, which are hard to diagnose without concrete examples.
This issue has not had activity in 30 days. Please feel free to reopen if you have more issues. You may apply the "never-stale" tag to prevent this from happening.