
CrewAI Studio UI keeps spinning without a specific error

IT-Forrest opened this issue 3 months ago · 1 comment

1. Main issue: following the README.md,

  • first run ./install_venv.sh on Ubuntu 22.04, then
  • start the UI with ./run_venv.sh from VS Code, after which
  • the UI shows up in the browser as follows: [image]

However, the UI stays black and keeps spinning, and nothing further is rendered in the browser. Thanks for any suggestions or hints.
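For reference, a minimal health check over plain HTTP can rule out the server itself (a sketch assuming the default port 8501; /_stcore/health is the health path in recent Streamlit releases, /healthz in older ones):

import urllib.request

# Check that the Streamlit server answers plain HTTP before suspecting the app.
for path in ("/_stcore/health", "/healthz"):
    try:
        with urllib.request.urlopen(f"http://localhost:8501{path}", timeout=5) as r:
            print(path, r.status, r.read().decode())
    except Exception as e:
        print(path, "failed:", e)

If this returns ok while the page still spins, the HTTP side is healthy and the problem is more likely in the WebSocket connection or in the app script itself.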

2. Test environment: the backend is vLLM serving Llama-3.1-8B, started via ./run_vllm.sh:

# Clear any proxy settings so local traffic is not routed through a proxy
unset https_proxy;
unset http_proxy;
unset HTTPS_PROXY;
unset HTTP_PROXY;

model_name="/mnt/weka/data/llm-d-models-pv/Meta-Llama-3.1-8B-Instruct/"
PORT=8200
PREFILLER_TP_SIZE=1

# Serve the model under the alias "llama3" and tee the server log to decode_nonPD.log
BASE_CMD="VLLM_LOGGING_LEVEL=DEBUG VLLM_USE_V1=1 vllm serve $model_name \
    --port $PORT \
    --enforce-eager \
    --served-model-name llama3 \
    --gpu-memory-utilization 0.8 \
    --tensor-parallel-size $PREFILLER_TP_SIZE \
    --max-num-batched-tokens 99999 2>&1 | tee decode_nonPD.log "

eval "$BASE_CMD"
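To confirm the endpoint is reachable before involving CrewAI Studio, a minimal sketch that lists the served models over vLLM's OpenAI-compatible API (it assumes the port and served model name from the script above):

import json
import urllib.request

# vLLM exposes an OpenAI-compatible API; /v1/models should list "llama3".
req = urllib.request.Request(
    "http://127.0.0.1:8200/v1/models",
    headers={"Authorization": "Bearer EMPTY"},  # vLLM accepts any key unless --api-key is set
)
with urllib.request.urlopen(req, timeout=10) as r:
    print(json.dumps(json.load(r), indent=2))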

The run_venv.sh script is as follows:

#!/bin/bash

# Get the directory where the script is located
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )"

# Activate the virtual environment
source "$SCRIPT_DIR/venv/bin/activate"

cd "$SCRIPT_DIR"

# Set USER_AGENT to suppress warning
# export USER_AGENT="CrewAI-Studio/1.0"

# Enable debug logging
export STREAMLIT_LOGGER_LEVEL=debug

# Ensure Python can import modules from the app folder (e.g., `import db_utils`)
export PYTHONPATH="$SCRIPT_DIR/app:$SCRIPT_DIR:$PYTHONPATH"

# Add verbose output and run with additional debug flags
#ls -l crewai.db
export DEBUG_UI=1

streamlit run app/app.py \
  --server.headless True \
  --logger.level debug \
  --server.enableCORS False \
  --server.enableXsrfProtection False
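A Streamlit page that loads but keeps spinning while HTTP is healthy often means the WebSocket upgrade is being blocked (e.g., by a corporate proxy). A sketch to test the handshake directly, assuming the third-party websocket-client package and the /_stcore/stream path used by recent Streamlit releases:

import websocket  # pip install websocket-client

# If the upgrade is blocked (proxy, firewall), this raises instead of printing.
ws = websocket.create_connection("ws://localhost:8501/_stcore/stream", timeout=10)
print("WebSocket handshake OK")
ws.close()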

3. Supplementary info

3.1. The log printed by run_venv.sh is as follows:

root:/home/user/test/crew_ai/CrewAI-Studio# ./run_venv.sh 
2025-09-22 18:04:24.569 No singleton. Registering one.
2025-09-22 18:04:24.571 Watcher created for /home/user/test/crew_ai/CrewAI-Studio/.streamlit/config.toml
2025-09-22 18:04:24.574 Starting new event loop for server
2025-09-22 18:04:24.574 Starting server...
2025-09-22 18:04:24.574 Serving static content from /home/user/test/crew_ai/CrewAI-Studio/venv/lib/python3.10/site-packages/streamlit/static
2025-09-22 18:04:24.580 Server started on port 8501
2025-09-22 18:04:24.580 Runtime state: RuntimeState.INITIAL -> RuntimeState.NO_SESSIONS_CONNECTED

  You can now view your Streamlit app in your browser.

  Local URL: http://localhost:8501
  Network URL: http://172.2.4.6:8501
  External URL: http://192.5.5.2:8501

2025-09-22 18:04:24.903 Setting up signal handler
2025-09-22 18:04:34.385 Watcher created for /home/user/test/crew_ai/CrewAI-Studio/app/app.py
2025-09-22 18:04:34.385 AppSession initialized (id=eb121d07-c9f7-4430-9b81-9eb3256aa9e7)
2025-09-22 18:04:34.385 Created new session for client 132256311294464. Session ID: eb121d07-c9f7-4430-9b81-9eb3256aa9e7
2025-09-22 18:04:34.385 Runtime state: RuntimeState.NO_SESSIONS_CONNECTED -> RuntimeState.ONE_OR_MORE_SESSIONS_CONNECTED
2025-09-22 18:04:34.505 Received the following back message:
rerun_script {
  widget_states {
  }
  context_info {
    timezone: "America/Los_Angeles"
    timezone_offset: 420
    locale: "zh-CN"
    url: "http://localhost:8501/"
    is_embedded: false
    color_scheme: "dark"
  }
}

2025-09-22 18:04:34.506 Beginning script thread
2025-09-22 18:04:34.506 Running script RerunData(widget_states=, context_info=timezone: "America/Los_Angeles"
timezone_offset: 420
locale: "zh-CN"
url: "http://localhost:8501/"
is_embedded: false
color_scheme: "dark"
)
2025-09-22 18:04:34.506 Disconnecting files for session with ID eb121d07-c9f7-4430-9b81-9eb3256aa9e7
2025-09-22 18:04:34.506 Sessions still active: dict_keys([])
2025-09-22 18:04:34.506 Files: 0; Sessions with files: 0
2025-09-22 18:04:42,227 - 132256292980288 - user_agent.py-user_agent:11 - WARNING: USER_AGENT environment variable not set, consider setting it to identify your requests.
2025-09-22 18:04:42.269 Watcher created for /home/user/test/crew_ai/CrewAI-Studio/app/app.py
2025-09-22 18:04:42.270 AppSession initialized (id=4692cfd2-5ec5-41cb-ab74-960240e9a106)
2025-09-22 18:04:42.270 Created new session for client 132248877798288. Session ID: 4692cfd2-5ec5-41cb-ab74-960240e9a106
2025-09-22 18:04:42.270 Runtime state: RuntimeState.ONE_OR_MORE_SESSIONS_CONNECTED -> RuntimeState.ONE_OR_MORE_SESSIONS_CONNECTED
2025-09-22 18:04:42.490 Received the following back message:
rerun_script {
  widget_states {
  }
  context_info {
    timezone: "America/Los_Angeles"
    timezone_offset: 420
    locale: "zh-CN"
    url: "http://localhost:8501/"
    is_embedded: false
    color_scheme: "dark"
  }
}

2025-09-22 18:04:42.500 Beginning script thread
2025-09-22 18:04:42.506 Running script RerunData(widget_states=, context_info=timezone: "America/Los_Angeles"
timezone_offset: 420
locale: "zh-CN"
url: "http://localhost:8501/"
is_embedded: false
color_scheme: "dark"
)
2025-09-22 18:04:42.514 Disconnecting files for session with ID 4692cfd2-5ec5-41cb-ab74-960240e9a106
2025-09-22 18:04:42.514 Sessions still active: dict_keys([])
2025-09-22 18:04:42.514 Files: 0; Sessions with files: 0
2025-09-22 18:04:42.556 Adding media file 7826c0cc992ca5425f7150c9058cacb152b5c9a08ec46e538136f4c3
[UI-DEBUG] set_page_config done
[UI-DEBUG] set_page_config done
[UI-DEBUG] dotenv loaded
[UI-DEBUG] dotenv loaded
[UI-DEBUG] env secrets loaded
[UI-DEBUG] env secrets loaded
2025-09-22 18:04:42.770 MediaFileHandler: GET 7826c0cc992ca5425f7150c9058cacb152b5c9a08ec46e538136f4c3.png
2025-09-22 18:04:42.770 MediaFileHandler: Sending image/png file 7826c0cc992ca5425f7150c9058cacb152b5c9a08ec46e538136f4c3.png
2025-09-22 18:11:08.954 Received the following back message:
load_git_info: true
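The last app-level markers are the repeated [UI-DEBUG] env secrets loaded lines, so whatever app.py does next appears to block without logging. A sketch of the kind of tracing wrapper that could narrow this down (traced is a hypothetical helper, not CrewAI-Studio API; wrap whichever calls follow the secrets-loading step):

import functools
import time

def traced(fn):
    """Print timestamped start/done markers around fn to locate a blocking call."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        print(f"[UI-DEBUG] {fn.__name__} start {time.strftime('%H:%M:%S')}", flush=True)
        result = fn(*args, **kwargs)
        print(f"[UI-DEBUG] {fn.__name__} done {time.strftime('%H:%M:%S')}", flush=True)
        return result
    return wrapper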

3.2. The vLLM backend works well with the following test:

import os
from litellm import completion
import litellm as ll

# Allow configuring via env; default to local vLLM on 127.0.0.1:8200 (avoids proxy/IPv6 issues)
API_BASE = os.getenv("OPENAI_BASE_URL", "http://127.0.0.1:8200/v1")
API_KEY = os.getenv("OPENAI_API_KEY", "EMPTY")  # vLLM accepts any non-empty key by default

# Optional: turn on LiteLLM debug logs by setting LITELLM_DEBUG=1
if os.getenv("LITELLM_DEBUG") in ("1", "true", "True"):
    ll._turn_on_debug()

resp = completion(
    model="openai/llama3",  # ensure vLLM was started with --served-model-name llama3
    api_key=API_KEY,
    api_base=API_BASE,
    messages=[{"role": "user", "content": "What is Quantum Computing?"}],
    max_tokens=64,     # keep the first response small (prevents huge default)
    temperature=0,     # stable, quick response
    stream=False,      # simple, non-streaming response
    timeout=180,       # allow time for first-call warmup if needed
)
print(resp['choices'][0]['message']['content'])
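For completeness, the same request without litellm, which separates client-library behavior from network/proxy issues (same endpoint, key, and model name as above):

import json
import os
import urllib.request

payload = {
    "model": "llama3",
    "messages": [{"role": "user", "content": "What is Quantum Computing?"}],
    "max_tokens": 64,
    "temperature": 0,
}
req = urllib.request.Request(
    os.getenv("OPENAI_BASE_URL", "http://127.0.0.1:8200/v1") + "/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json", "Authorization": "Bearer EMPTY"},
)
with urllib.request.urlopen(req, timeout=180) as r:
    print(json.load(r)["choices"][0]["message"]["content"])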

IT-Forrest · Sep 23 '25

@strnad Thanks for any suggestions or comments.

IT-Forrest · Sep 24 '25