
SyntaxError: Unexpected reserved word

Open katmai opened this issue 1 year ago • 6 comments

Describe the bug

Stuck trying to run the software: the UI dev server fails to start.

To Reproduce

Steps to reproduce the behavior:

    cd ui
    bun install
    bun run dev

Expected behavior

I have never run the software before; I expected the UI to spin up, I guess.


Additional context

    bun install v1.0.36 (40f61ebb)

    4 packages installed [369.00ms]

    atlas@chia01:~/devika/ui$ bun run dev
    $ vite dev
    file:///home/atlas/devika/ui/node_modules/vite/bin/vite.js:7
    await import('source-map-support').then((r) => r.default.install())
    ^^^^^

    SyntaxError: Unexpected reserved word
        at Loader.moduleStrategy (internal/modules/esm/translators.js:133:18)
        at async link (internal/modules/esm/module_job.js:42:21)
    error: script "dev" exited with code 1
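The failing line in vite.js uses top-level `await`, which Node only accepts in ES modules from v14.8 onward (and vite 5 requires Node 18+); Node 12 parses `await` there as a reserved word, hence the error. A minimal shell sketch of the version gate (the `node_ok` helper name is hypothetical):

```shell
# Hypothetical helper: does a Node version string satisfy vite 5's
# floor (>= 18)? Top-level `await` itself needs Node >= 14.8, which is
# why Node 12.x fails with "SyntaxError: Unexpected reserved word".
node_ok() {
  major=${1%%.*}        # everything before the first dot
  [ "$major" -ge 18 ]
}

node_ok "12.22.9" && echo "12.22.9 ok" || echo "12.22.9 too old"
node_ok "18.19.0" && echo "18.19.0 ok" || echo "18.19.0 too old"
```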

katmai avatar Apr 01 '24 08:04 katmai

which node version are you using??

ARajgor avatar Apr 03 '24 19:04 ARajgor

> which node version are you using??

    $ node -v
    v12.22.9

Now that you ask, I'm installing Node 18 and trying again.

katmai avatar Apr 03 '24 20:04 katmai

It must have been the Node version, because the build now moves forward and 'bun run dev' works.

    $ vite dev
    Forced re-optimization of dependencies

      VITE v5.2.2  ready in 504 ms

      ➜  Local:   http://localhost:3000/
      ➜  Network: use --host to expose
      ➜  press h + enter to show help

But now I can't start devika.py.

 UserWarning: CUDA initialization: The NVIDIA driver on your system is too old (found version 11040). Please update your GPU driver by downloading and installing a new version from the URL: http://www.nvidia.com/Download/index.aspx Alternatively, go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver. (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:108.)
  return torch._C._cuda_getDeviceCount() > 0
24.04.03 22:15:46: root: INFO   : BERT model loaded successfully.
Traceback (most recent call last):
  File "/home/atlas/devika/devika.py", line 26, in <module>
    from src.agents import Agent
  File "/home/atlas/devika/src/agents/__init__.py", line 1, in <module>
    from .agent import Agent
  File "/home/atlas/devika/src/agents/agent.py", line 1, in <module>
    from .planner import Planner
  File "/home/atlas/devika/src/agents/planner/__init__.py", line 1, in <module>
    from .planner import Planner
  File "/home/atlas/devika/src/agents/planner/planner.py", line 3, in <module>
    from src.llm import LLM
  File "/home/atlas/devika/src/llm/__init__.py", line 1, in <module>
    from .llm import LLM
  File "/home/atlas/devika/src/llm/llm.py", line 5, in <module>
    from .ollama_client import Ollama
  File "/home/atlas/devika/src/llm/ollama_client.py", line 1, in <module>
    import ollama
  File "/home/atlas/.pyenv/versions/3.11.3/lib/python3.11/site-packages/ollama/__init__.py", line 1, in <module>
    from ollama._client import Client, AsyncClient
  File "/home/atlas/.pyenv/versions/3.11.3/lib/python3.11/site-packages/ollama/_client.py", line 4, in <module>
    import httpx
  File "/home/atlas/.pyenv/versions/3.11.3/lib/python3.11/site-packages/httpx/__init__.py", line 2, in <module>
    from ._api import delete, get, head, options, patch, post, put, request, stream
  File "/home/atlas/.pyenv/versions/3.11.3/lib/python3.11/site-packages/httpx/_api.py", line 6, in <module>
    from ._client import Client
  File "/home/atlas/.pyenv/versions/3.11.3/lib/python3.11/site-packages/httpx/_client.py", line 32, in <module>
    from ._transports.default import AsyncHTTPTransport, HTTPTransport
  File "/home/atlas/.pyenv/versions/3.11.3/lib/python3.11/site-packages/httpx/_transports/default.py", line 32, in <module>
    import httpcore
  File "/home/atlas/.pyenv/versions/3.11.3/lib/python3.11/site-packages/httpcore/__init__.py", line 1, in <module>
    from ._api import request, stream
  File "/home/atlas/.pyenv/versions/3.11.3/lib/python3.11/site-packages/httpcore/_api.py", line 5, in <module>
    from ._sync.connection_pool import ConnectionPool
  File "/home/atlas/.pyenv/versions/3.11.3/lib/python3.11/site-packages/httpcore/_sync/__init__.py", line 1, in <module>
    from .connection import HTTPConnection
  File "/home/atlas/.pyenv/versions/3.11.3/lib/python3.11/site-packages/httpcore/_sync/connection.py", line 12, in <module>
    from .._synchronization import Lock
  File "/home/atlas/.pyenv/versions/3.11.3/lib/python3.11/site-packages/httpcore/_synchronization.py", line 11, in <module>
    import trio
  File "/home/atlas/.pyenv/versions/3.11.3/lib/python3.11/site-packages/trio/__init__.py", line 18, in <module>
    from ._core import (
  File "/home/atlas/.pyenv/versions/3.11.3/lib/python3.11/site-packages/trio/_core/__init__.py", line 27, in <module>
    from ._run import (
  File "/home/atlas/.pyenv/versions/3.11.3/lib/python3.11/site-packages/trio/_core/_run.py", line 2452, in <module>
    from ._io_epoll import EpollIOManager as TheIOManager
  File "/home/atlas/.pyenv/versions/3.11.3/lib/python3.11/site-packages/trio/_core/_io_epoll.py", line 188, in <module>
    class EpollIOManager:
  File "/home/atlas/.pyenv/versions/3.11.3/lib/python3.11/site-packages/trio/_core/_io_epoll.py", line 189, in EpollIOManager
    _epoll = attr.ib(factory=select.epoll)
                             ^^^^^^^^^^^^
AttributeError: module 'select' has no attribute 'epoll'. Did you mean: 'poll'?
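For what it's worth, a missing `select.epoll` usually means the interpreter was built without epoll support: the attribute only exists on Linux builds of CPython, and trio requires it there. A quick probe of the active interpreter (assumes `python3` is on PATH):

```shell
# Print whether this Python interpreter exposes select.epoll
# (a Linux-only API that trio's epoll I/O manager depends on).
python3 -c 'import select; print("epoll" if hasattr(select, "epoll") else "no epoll")'
```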

katmai avatar Apr 03 '24 20:04 katmai

Okay, so the frontend is solved. Your backend problem looks Ollama-related; can you try to run without Ollama?

ARajgor avatar Apr 03 '24 20:04 ARajgor

> Okay, so the frontend is solved. Your backend problem looks Ollama-related; can you try to run without Ollama?

I did a test swap to Python 3.10 and it came online. Is there a recommended Python version we should be using?

$ python devika.py
24.04.03 22:26:39: root: INFO   : Initializing Devika...
24.04.03 22:26:39: root: INFO   : Initializing Prerequisites Jobs...
24.04.03 22:26:44: root: INFO   : Loading sentence-transformer BERT models...
/home/atlas/devika/.venv/lib/python3.10/site-packages/torch/cuda/__init__.py:141: UserWarning: CUDA initialization: The NVIDIA driver on your system is too old (found version 11040). Please update your GPU driver by downloading and installing a new version from the URL: http://www.nvidia.com/Download/index.aspx Alternatively, go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver. (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:108.)
  return torch._C._cuda_getDeviceCount() > 0
24.04.03 22:26:47: root: INFO   : BERT model loaded successfully.
24.04.03 22:26:49: root: INFO   : Ollama available
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
        - Avoid using `tokenizers` before the fork if possible
        - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
        - Avoid using `tokenizers` before the fork if possible
        - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
24.04.03 22:26:51: root: INFO   : Devika is up and running!
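As an aside, the repeated huggingface/tokenizers fork warning in this log can be silenced by setting the environment variable the warning itself suggests, before launching the backend. A sketch:

```shell
# Disable tokenizers parallelism before the process forks, as the
# warning suggests, then start the backend.
export TOKENIZERS_PARALLELISM=false
echo "TOKENIZERS_PARALLELISM=$TOKENIZERS_PARALLELISM"
# python devika.py
```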

katmai avatar Apr 03 '24 20:04 katmai

> which node version are you using??
>
> $ node -v
> v12.22.9
>
> Now that you ask, I'm installing Node 18 and trying again.

Switching from Node 12.x to 18.x worked for me too (Linux Mint 21.2).

matiyin avatar Apr 04 '24 21:04 matiyin

For Node, use >= 18; for Python, use >= 3.10 and < 3.12.
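The constraints above can be sketched as a shell check (version strings are illustrative; the `node_ok`/`py_ok` helper names are hypothetical):

```shell
# Node >= 18
node_ok() {
  [ "${1%%.*}" -ge 18 ]
}

# Python >= 3.10 and < 3.12
py_ok() {
  major=${1%%.*}       # "3"
  rest=${1#*.}         # e.g. "11.3"
  minor=${rest%%.*}    # e.g. "11"
  [ "$major" -eq 3 ] && [ "$minor" -ge 10 ] && [ "$minor" -lt 12 ]
}

node_ok "18.19.0" && echo "node 18.19.0 ok"    || echo "node 18.19.0 unsupported"
py_ok "3.11.3"    && echo "python 3.11.3 ok"   || echo "python 3.11.3 unsupported"
py_ok "3.12.1"    && echo "python 3.12.1 ok"   || echo "python 3.12.1 unsupported"
```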

ARajgor avatar Apr 06 '24 09:04 ARajgor