pdfGPT
Unable to pull a particular Docker layer of pdfchatter
Hi, I ran the docker pull command as suggested in the README but I get the following output.
docker pull registry.hf.space/bhaskartripathi-pdfchatter:latest
latest: Pulling from bhaskartripathi-pdfchatter
bd8f6a7501cc: Pull complete
44718e6d535d: Pull complete
efe9738af0cb: Pull complete
f37aabde37b8: Pull complete
3923d444ed05: Pull complete
1ecef690e281: Pull complete
48673bbfd34d: Pull complete
b761c288f4b0: Pull complete
4ea6ac43d369: Pull complete
aa9e20aea25a: Extracting [==================================================>] 99.49MB/99.49MB
63248b4e37e2: Download complete
5806ef4fec33: Download complete
ec89491cf0cd: Download complete
e662a12eee66: Download complete
46995db4b389: Download complete
7d67ad956d91: Download complete
b025d72cdd42: Download complete
0bbbfa67eeab: Download complete
66aa17d0dc7e: Download complete
failed to register layer: Error processing tar file(exit status 1): archive/tar: invalid tar header
Is there maybe something wrong with the aa9e20aea25a layer?
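That `invalid tar header` message usually points to a corrupted layer download rather than a problem with the image itself. As a quick local sanity check (a sketch, not part of pdfGPT), Python's standard `tarfile` module can verify whether an archive's headers are readable:

```python
import tarfile

def is_valid_tar(path: str) -> bool:
    """Return True if every header in the tar archive can be parsed."""
    try:
        with tarfile.open(path) as tf:
            tf.getmembers()  # forces each tar header to be read
        return True
    except tarfile.TarError:
        return False
```

Re-running `docker pull` often re-fetches the broken layer; clearing unused local data first (`docker system prune`) may also help.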
I think you might have tried this: https://huggingface.co/spaces/bhaskartripathi/pdfChatter?docker=true Looking at the error, I suggest the following:
- Make sure your API key is set and not expired: Your_Key_Here="YOUR_VALUE_HERE"
- When you save the image, use: docker save --output=C:\YOUR_PATH\my_docker_image.tar aa9e20aea25a (the image ID)
- When you load the image, try: docker load --input C:\YOUR_PATH\my_docker_image.tar
I am also getting the same error and am unable to run it locally. I have also tried installing all of the required libraries manually and running app.py and api.py by hand (python3 app.py and lc-serve deploy local api), but the API does not work properly. In particular, the endpoints seem not to be registered: the API's Swagger docs show only the default routes, not ask_url or ask_file. As a result, asking a question through the app only returns a "detail: not found" error from the endpoint.
@deepankarm Please help.
Hey @timothydillan, can you please share the following information to help in debugging?
- OS
- Python version (python --version)
- langchain-serve version (lc-serve -v)
- Which directory are you executing the lc-serve deploy ... command from? And what's the content inside that directory? ls -l should help.
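The first three items above can be collected in one short script (a sketch; the `langchain-serve` distribution name is taken from this thread):

```python
import platform
from importlib import metadata

def debug_info() -> str:
    """Collect OS, Python, and langchain-serve versions for a bug report."""
    try:
        lcs = metadata.version("langchain-serve")
    except metadata.PackageNotFoundError:
        lcs = "not installed"
    return "\n".join([
        f"OS: {platform.platform()}",
        f"Python: {platform.python_version()}",
        f"langchain-serve: {lcs}",
    ])

print(debug_info())
```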
Hey @deepankarm and @bhaskartripathi,
I am on the latest version of macOS (Ventura 13.3.1). My Python version is 3.8.8, langchain-serve is on 0.0.22, and I am running lc-serve deploy in the project's (pdfGPT) directory. For now, as a temporary workaround, I was able to use the app properly by changing the langchain-serve implementation of the REST API to FastAPI instead.
Also getting this issue. I'm trying to set it up through Docker and it keeps getting stuck at the same part. Running Pop!_OS on a ThinkPad X395.
Hi all,
Instead of running pdfGPT from the container, I did the following to just run it on my host:
- Installed the packages in requirements.txt
- Downloaded the Universal Sentence Encoder locally and replaced the code in the api.py file as instructed in the README
- Hardcoded my API key inside the load_openai_key function (couldn't be bothered to export my API key every time)
- Ran lc-serve deploy local api in one terminal
- Ran python3 app.py in another terminal
- Opened a browser and went to http://127.0.0.1:7860
Hope this helps
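The key-hardcoding step above could look roughly like this (a sketch; the actual `load_openai_key` in api.py may differ, and hardcoding a key is only sensible for private local testing):

```python
import os

def load_openai_key() -> str:
    """Return the OpenAI API key, falling back to a hardcoded value.

    The fallback saves exporting OPENAI_API_KEY in every shell session,
    but never commit a real key to version control.
    """
    key = os.environ.get("OPENAI_API_KEY") or "sk-your-key-here"  # placeholder
    if key == "sk-your-key-here":
        print("Warning: using the hardcoded placeholder key")
    return key
```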
@youngchanpark any idea about this warning when I run python3 app.py?
2023-07-03 18:57:49.477996: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-07-03 18:57:49.863083: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-07-03 18:57:49.864075: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-07-03 18:57:50.859284: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT