Younes DARRASSI
This is what causes the duplication:
```python
def _get_latest_message(self, project_state, from_devika: bool):
    message_stack = self._get_message_stack(project_state)
    for message in reversed(message_stack):
        if message["from_devika"] == from_devika:
            return message
```
I tried this and the duplication of...
#485 I tested this PR intensively and it's working like magic. I added this after committing the knowledge object, `session.refresh(knowledge)  # Reload the object from the session`, to the add_knowledge function and...
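A minimal sketch of where that call sits, assuming an SQLModel-style add_knowledge; the Knowledge model, its fields, and the engine path are assumptions for illustration, not the project's exact code:

```python
from typing import Optional

from sqlmodel import Field, Session, SQLModel, create_engine


class Knowledge(SQLModel, table=True):
    # Hypothetical model for illustration; the field names are assumptions.
    id: Optional[int] = Field(default=None, primary_key=True)
    tag: str
    contents: str


engine = create_engine("sqlite:///knowledge.db")  # assumed local database path
SQLModel.metadata.create_all(engine)  # create the table if it does not exist


def add_knowledge(tag: str, contents: str) -> Knowledge:
    knowledge = Knowledge(tag=tag, contents=contents)
    with Session(engine) as session:
        session.add(knowledge)
        session.commit()
        session.refresh(knowledge)  # Reload the object from the session
    return knowledge
```

The refresh reloads the row so the committed object's attributes (including any autogenerated fields) are populated before the session closes.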
It enhances retrieval of memory from the local database. My approach is to combine context from the knowledge base with results from the web browser and feed both to an LLM...
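A minimal sketch of that combination step, assuming plain string snippets from the knowledge base and the browser; the function and parameter names are mine, not the project's:

```python
def build_augmented_prompt(query: str, kb_context: list[str], web_results: list[str]) -> str:
    # Merge knowledge-base snippets and web search results into a single
    # prompt that is then sent to the LLM. All names here are illustrative.
    kb_block = "\n".join(f"- {snippet}" for snippet in kb_context)
    web_block = "\n".join(f"- {result}" for result in web_results)
    return (
        "Use the context below to answer the request.\n\n"
        f"Knowledge base context:\n{kb_block}\n\n"
        f"Web search results:\n{web_block}\n\n"
        f"Request: {query}\n"
    )
```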
and I did not like this in llm.py:
```python
with concurrent.futures.ThreadPoolExecutor() as executor:
    future = executor.submit(model.inference, self.model_id, prompt)
```
because it's consuming a lot of tokens.
> can you fetch the latest changes and try it again?

Yes, I did. In fact, I started all over because of a merge conflict with other pending PRs.
> I just pushed the patch for config. And the ThreadPoolExecutor is for calculating the time taken by the model, nothing to do with token usage.

From the new commit 1c8450b, I got...
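For context, a minimal sketch of that timing pattern as I understand it; `model.inference(self.model_id, prompt)` mirrors the call in llm.py, while the helper name and polling interval are assumptions:

```python
import time
import concurrent.futures


def timed_inference(model, model_id: str, prompt: str) -> tuple[str, float]:
    # Run the model call in a worker thread and measure wall-clock time.
    # The executor only keeps the main thread free while the model runs;
    # it does not change how many tokens the request consumes.
    start = time.time()
    with concurrent.futures.ThreadPoolExecutor() as executor:
        future = executor.submit(model.inference, model_id, prompt)
        while not future.done():
            time.sleep(0.5)  # e.g. update a spinner or elapsed-time display here
        response = future.result()
    elapsed = time.time() - start
    return response, elapsed
```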
@ARajgor can you create a new branch `dev`? I'm working on RAG things.
```
{data: 'Server Connected'}
{message: 'create snake game in python', base_model: 'gemini-pro', project_name: 'test3', search_engine: 'DuckDuckGo'}
{messages: {…}}
messages: {from_devika: false, message: 'create snake game in python', timestamp: '2024-04-25...
```
Unfortunately, I can't.
Gemini works, but the logic you defined is making it slow.
```
Socket connected :: {'data': 'frontend connected!'}
24.04.25 13:48:43: root: INFO : SOCKET socket_response MESSAGE: {'data': 'Server Connected'}
24.04.25...
```