Dot is typing ..
Hi, I wanted to give Dot a try on my laptop. It has an i7-9750H CPU, an RTX 2070-QM, and 16 GB of RAM, running Windows 11. I've installed the latest GPU version to date (0.9.2), and the Phi-3 model has been downloaded. I restart, ask a simple "hello", and the message "Dot is typing" appears... forever. I've tried with Big Dot too. Same result.
Is there anything I can do to make it work, or anything that would help with debugging?
I've also tried the CPU version with no success.
In addition, the dark theme switch is not saved.
I had also installed cuda_12.2.1_536.67_windows.exe and the Visual Studio C++ Redistributable.
Hi @BENETNATH,
I'm facing the same issue too. I'm using the Windows (GPU) executable of Dot version 0.9.2. I have an Intel Core i7-4710HQ CPU and an NVIDIA GeForce GTX 970M GPU. My machine has more than 500 GB of disk and 16 GB of RAM. I think the tool is no longer actively maintained, because the latest commit on the main branch was made more than 4 months ago. I hope the developers will be active again soon.
Good luck
Same behavior here, installed everything, selected the LLM in settings, but no luck
Same problem here...
AMD Ryzen 9 5950X 16-Core Processor, 3.40 GHz; 64.0 GB of installed RAM; Windows 11; running on GPU.
Same issue for me...
Same here. Disappointing 🙁
Got this on macOS:
$ /Applications/Dot.app/Contents/MacOS/Dot
Python Path: /Applications/Dot.app/Contents/Resources/llm/python/bin/python3
IPC Setup Starting
2024-09-19 09:28:10.771 Dot[97782:2425513] WARNING: Secure coding is not enabled for restorable state! Enable secure coding by implementing NSApplicationDelegate.applicationSupportsSecureRestorableState: and returning YES.
Failed to create main window
IPC Message Received: Initiate Download
File already exists, no download needed.
File already exists, no download needed.
Current working directory: /Users/lauhub/Library/Application Support
Python Script Error: /Applications/Dot.app/Contents/Resources/llm/python/lib/python3.9/site-packages/langchain/__init__.py:34: UserWarning: Importing PromptTemplate from langchain root module is no longer supported. Please use langchain.prompts.PromptTemplate instead.
warnings.warn(
Python Script Error: Traceback (most recent call last):
File "/Applications/Dot.app/Contents/Resources/llm/scripts/docdot.py", line 67, in <module>
vector_store = FAISS.load_local(os.path.join(folder_path, "Dot-data"), embeddings)
File "/Applications/Dot.app/Contents/Resources/llm/python/lib/python3.9/site-packages/langchain_community/vectorstores/faiss.py", line 1060, in load_local
index = faiss.read_index(
File "/Applications/Dot.app/Contents/Resources/llm/python/lib/python3.9/site-packages/faiss/swigfaiss.py", line 9924, in read_index
return _swigfaiss.read_index(*args)
RuntimeError: Error in faiss::FileIOReader::FileIOReader(const char *) at /Users/runner/work/faiss-wheels/faiss-wheels/faiss/faiss/impl/io.cpp:68: Error: 'f' failed: could not open /Users/lauhub/Documents/Dot-Data/Dot-data/index.faiss for reading: No such file or directory
@alexpinel, would you consider taking a look? This tool seems really promising!
Same here:
I re-installed twice and checked the gguf file. It's in C:\Users\*****\OneDrive\Documents\Dot-Data\Phi-3-mini-4k-instruct-q4.gguf.
Why in OneDrive? I have tried making it local; it doesn't change anything. Win11, 32 GB RAM, RTX 4070 with 16 GB VRAM here.
Same problem for me. Question: Big Dot mode does not work here either. Same for you?
Same, Big Dot is also broken.
Same here. The path to the .gguf LLM is correct, but nothing happens.
There are two issues here, I think. #1 is the missing index file. This is what my llama has to say about it:
The 'index.faiss' is likely a pre-built index created by the FAISS library to speed up searching through large datasets. You would typically not need to manually create or download such files yourself as they are usually generated automatically when you run certain commands within your project using FAISS. If you're encountering issues, please check if there was any step missed during installation or setup of the library in your specific environment.
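In practice, that means the load call could guard against a missing index instead of crashing. Here is a minimal sketch of that idea, assuming langchain_community and an all-MiniLM embedding model (both placeholders; I don't know exactly what Dot ships):

```python
import os

from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS

data_dir = os.path.join(os.path.expanduser("~"), "Documents", "Dot-Data", "Dot-data")
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")

if os.path.exists(os.path.join(data_dir, "index.faiss")):
    # Newer langchain versions also want allow_dangerous_deserialization=True here
    vector_store = FAISS.load_local(data_dir, embeddings)
else:
    # First run: nothing has been indexed yet, so create and persist a stub
    # index instead of crashing with "could not open ... index.faiss"
    vector_store = FAISS.from_texts(["(no documents indexed yet)"], embeddings)
    vector_store.save_local(data_dir)
```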
The other thing is that I believe llama.cpp has undergone a change that makes it incompatible with "older" gguf models, and maybe we should try rebuilding Dot with an older version of llama.cpp. Not 100% sure about this.
Subject: Issue Loading Model with Dot
Hello guys and gals,
I am experiencing the same issue with Dot. When I attempt to interact with the program, it displays a message indicating that it is typing, but nothing happens. Here are the details:
Machine Configuration:
- Processor: AMD Ryzen 5 3500U with Radeon Vega Mobile Gfx (4 cores, 8 logical processors)
- Graphics Card: AMD Radeon Vega 8 (2 GB SDRAM)
- Operating System: Windows 11 (up to date)
Description of the Problem: When I launch the program, it seems to function correctly, but during interaction it does not produce any response.
Error Messages Obtained in Debug Mode:
llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'phi3'
llama_load_model_from_file: failed to load model
pydantic.v1.error_wrappers.ValidationError: 1 validation error for LlamaCpp
__root__
Could not load Llama model from path: C:\Users\Administrateur\Documents\Dot-Data\Phi-3-mini-4k-instruct-q4.gguf. Received error Failed to load model from file: C:\Users\Administrateur\Documents\Dot-Data\Phi-3-mini-4k-instruct-q4.gguf (type=value_error)
UserWarning: Importing PromptTemplate from langchain root module is no longer supported. Please use langchain.prompts.PromptTemplate instead.
LangChainDeprecationWarning: Importing embeddings from langchain is deprecated. Importing from langchain will no longer be supported as of langchain==0.2.0. Please import from langchain-community instead:
LangChainDeprecationWarning: Importing vector stores from langchain is deprecated. Importing from langchain will no longer be supported as of langchain==0.2.0. Please import from langchain-community instead:
Model Path: The model I am trying to load is at the default location: C:\Users\Administrateur\Documents\Dot-Data\Phi-3-mini-4k-instruct-q4.gguf
Actions Taken:
- I have verified that the model file exists and is not corrupted.
- I have attempted to update langchain and other libraries, but the issue persists.
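For reference, the failing load can be reproduced outside Dot in a few lines. This is a minimal sketch, assuming llama-cpp-python is installed locally and using the default model path from above:

```python
# pip install llama-cpp-python
from llama_cpp import Llama

model_path = r"C:\Users\Administrateur\Documents\Dot-Data\Phi-3-mini-4k-instruct-q4.gguf"

try:
    llm = Llama(model_path=model_path, n_ctx=4096)
    print(llm("Hello", max_tokens=16)["choices"][0]["text"])
except ValueError as err:
    # llama.cpp builds that predate Phi-3 support fail here with
    # "unknown model architecture: 'phi3'", matching the log above
    print(f"Load failed: {err}")
```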
Thank you in advance for your assistance and guidance in resolving this issue.
I've managed to make Dot 0.9.2.0 work on Windows 11.
To do so, I've made two fixes.
1. Fixed imports in embeddings.py and docdot.py:
This is to address the obsolete import definitions.
UserWarning: Importing PromptTemplate from langchain root module is no longer supported. Please use langchain.prompts.PromptTemplate instead.
LangChainDeprecationWarning: Importing embeddings from langchain is deprecated. Importing from langchain will no longer be supported as of langchain==0.2.0. Please import from langchain-community instead:
The files to be fixed are in the dot\resources\llm\scripts folder. You may want to fix bigdot.py as well. Below are the fixed imports:
embeddings.py
import sys
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_community.document_loaders import PyPDFLoader, DirectoryLoader, UnstructuredExcelLoader, TextLoader, UnstructuredPowerPointLoader, UnstructuredMarkdownLoader, Docx2txtLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
import os
import json
docdot.py
import sys
import json
from langchain.prompts import PromptTemplate
from langchain.chains import RetrievalQA
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_community.document_loaders import PyPDFLoader, DirectoryLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain_community.llms import LlamaCpp
from langchain.callbacks.manager import CallbackManager
import os
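For context, these imports feed the retrieval chain that docdot.py builds around the local model. Here is a rough sketch of that flow using the corrected imports above (the paths, prompt, and chain options are illustrative placeholders, not Dot's exact code):

```python
folder_path = os.path.join(os.path.expanduser("~"), "Documents", "Dot-Data")
model_path = os.path.join(folder_path, "Phi-3-mini-4k-instruct-q4.gguf")

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
# Newer langchain versions also require allow_dangerous_deserialization=True here
vector_store = FAISS.load_local(os.path.join(folder_path, "Dot-data"), embeddings)

# Stream tokens to stdout as the model generates them
llm = LlamaCpp(
    model_path=model_path,
    n_ctx=4096,
    callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
)

prompt = PromptTemplate.from_template(
    "Use the following context to answer.\n{context}\n\nQuestion: {question}\nAnswer:"
)
qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=vector_store.as_retriever(),
    chain_type_kwargs={"prompt": prompt},
)
print(qa.invoke({"query": "What do my documents say?"})["result"])
```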
2. Downloaded and used TheBloke/Mistral-7B-Instruct-v0.2-GGUF
The Dot Readme file mentions this Mistral model. To use it, download it and select it in Settings.
This is to address the main issue:
llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'phi3'
llama_load_model_from_file: failed to load model
pydantic.v1.error_wrappers.ValidationError: 1 validation error for LlamaCpp
__root__
Could not load Llama model from path: C:\Users\Administrateur\Documents\Dot-Data\Phi-3-mini-4k-instruct-q4.gguf. Received error Failed to load model from file: C:\Users\Administrateur\Documents\Dot-Data\Phi-3-mini-4k-instruct-q4.gguf (type=value_error)
This issue seems to be due to an incompatibility between the llama.cpp runtime bundled with Dot and the Phi-3-mini-4k-instruct-q4.gguf model. It resembles Ollama issue #3974 with Ollama 0.1.32.
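If you want to confirm the workaround outside the app first, here is a quick sanity check, as a minimal sketch using langchain's LlamaCpp wrapper; the file name below is TheBloke's Q4_K_M variant, so adjust the path to wherever you saved the download:

```python
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path=r"C:\Users\Administrateur\Documents\Dot-Data\mistral-7b-instruct-v0.2.Q4_K_M.gguf",
    n_ctx=4096,
)
# If this prints a sentence, the gguf is readable and Dot should accept it
print(llm.invoke("Reply with one short sentence: hello!"))
```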
Amazing: months and months since the first report, and this problem still persists. Install the app on Windows 10, start it up, enter a simple 'hi' prompt, and wait forever for nothing to happen. Just the stupid progress graphic. Surely it can't be that difficult to fix? Dot just doesn't work.
Hi!
First of all, my apologies for keeping you all in the dark for so long. I just completed my master's degree and finally have time to focus on Dot again.
It is a bit hard to pinpoint exactly why the "Dot is typing..." error occurs, but I am quite sure it is because of the way the backend was constructed, which was riddled with compatibility issues. Back when I started building Dot, the Node.js libraries lacked most of the features that the Python ones had, so I decided to build the entire app around llama-cpp-python. As you might imagine, making Python work in a standalone fashion within an Electron app was a weird choice at best...
To address these issues, I have rebuilt the entire backend of the app, which now uses node-llama-cpp for model inference. This should not only (hopefully!) solve this issue, but also bring a few nice features such as text streaming and increased file support. There are a few new bugs, however, and some features I have had to remove, mainly the text-to-speech and speech-recognition functionalities. I will try to add them back, but I felt they were more of a gimmick than anything.
This new version can be found on the release page of this GitHub repository. Please feel free to give it a try, and let me know how it works!
Thank you very much for all your help,
Alex
Thanks a lot @alexpinel, and congrats on completing your degree. I'm testing the latest release as I type. I've downloaded the latest model and I'm pointing to that file in the settings. When I use Big Dot, it now replies properly! When I use Doc Dot, I get an error when no document is added; if I add a doc, it indexes it and works well!
Thanks a lot! Congrats!
Hi @alexpinel
Thank you for replying to all of us. First of all, congratulations on completing your master's degree. I was a student too a couple of years ago, as were probably many of us in this thread, and I know how important it sometimes is to focus on one's studies and work hard. Congratulations also on this amazing project you have started to share with us. Don't worry about imperfections; that is the reality of many projects, and it's also a good way to learn and grow one's skills. I will definitely give it a try and will keep you posted as soon as possible.
Good luck with the rest.
Jean Prince