
Unable to Access Docker Container on Local Machine After Setup

soumilshah1995 opened this issue 1 year ago

Hi there,

I want to run the setup inside a Docker container and access it from my local machine. I am able to successfully query Snowflake tables using DuckDB inside the Docker container, but when I try to access the service locally (outside the container), I encounter issues.

Setup Script (setup_universql.sh):

#!/bin/bash

# Define variables
IMAGE="buremba/universql"
CONTAINER_NAME="universql_container"
PORT="8084"
SOURCE_DIR=$(pwd)  # Use the current directory
TARGET_DIR="/usr/app"
CACHE_DIR="/usr/app/cache"

# Ensure ACCOUNT is set via environment variable
if [[ -z "$ACCOUNT" ]]; then
  echo "Error: ACCOUNT environment variable is not set."
  echo "Please export ACCOUNT before running this script."
  exit 1
fi

# Ensure AWS environment variables are set
if [[ -z "$AWS_ACCESS_KEY_ID" || -z "$AWS_SECRET_ACCESS_KEY" || -z "$AWS_DEFAULT_REGION" ]]; then
  echo "Error: AWS environment variables are not set."
  echo "Please export AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_DEFAULT_REGION before running this script."
  exit 1
fi

# Pull the Docker image
echo "Pulling Docker image: $IMAGE"
docker pull $IMAGE

# Run the container in detached mode
echo "Running container..."
docker run -d --name "$CONTAINER_NAME" \
    -p "$PORT":8084 \
    --mount type=bind,source="$SOURCE_DIR",target="$TARGET_DIR" \
    "$IMAGE" snowflake --account "$ACCOUNT" --cache-directory "$CACHE_DIR"

# Exec into the container and install AWS CLI
echo "Installing AWS CLI..."
docker exec -it $CONTAINER_NAME apt update
docker exec -it $CONTAINER_NAME apt install -y awscli

# Configure AWS CLI with environment variables
echo "Configuring AWS CLI..."
docker exec -it $CONTAINER_NAME aws configure set aws_access_key_id "$AWS_ACCESS_KEY_ID"
docker exec -it $CONTAINER_NAME aws configure set aws_secret_access_key "$AWS_SECRET_ACCESS_KEY"
docker exec -it $CONTAINER_NAME aws configure set default.region "$AWS_DEFAULT_REGION"

echo "Setup complete!"

Setup Instructions:

export ACCOUNT="XXX"
export AWS_ACCESS_KEY_ID="XXX"
export AWS_SECRET_ACCESS_KEY='XXX'
export AWS_DEFAULT_REGION="us-east-1"

chmod +x setup_universql.sh
./setup_universql.sh
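
After running the script, this is roughly how I sanity-check things from the host; the HTTPS probe target is just a guess, and any response at all would mean the port mapping works:

# Quick post-setup checks (the curl target is an assumption; any answer
# on the port means the mapping from host to container is in place)
docker ps --filter name=universql_container      # container should be "Up"
docker logs universql_container                  # server should report it is listening
curl -vk https://localhostcomputing.com:8084/    # from the host: does anything respond?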

Code that Works Inside the Container:

import snowflake.connector

try:
    print("TRY 4 localhostcomputing.com")
    # Connect through Universql running in the same container
    conn = snowflake.connector.connect(
        host='localhostcomputing.com',
        port=8084,
        user='XXX',
        password='XXX',
        account='XXX',
    )
except Exception as e:
    print("FAIL 4 localhostcomputing.com:", e)

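For completeness, continuing from the connection above, running a query inside the container works as well (the table name here is just a placeholder):

# Continues the snippet above; MY_DB.MY_SCHEMA.MY_TABLE is a placeholder table
cur = conn.cursor()
cur.execute("SELECT COUNT(*) FROM MY_DB.MY_SCHEMA.MY_TABLE")
print(cur.fetchone())
cur.close()
conn.close()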

Issue:

When I try to access the service from my local machine (outside the container), I have tried several values for the host in the connection code:

- localhost
- localhostcomputing.com
- the container name (universql-server)
- 0.0.0.0

None of these work (a rough sketch of what I'm running on the host is below). Can you share the correct steps or configuration to reach the Docker container from my local machine?
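
Roughly what I am running on the host machine, looping over the host values above (credentials redacted):

import snowflake.connector

# Try each candidate host from the host machine; credentials are redacted
for host in ["localhost", "localhostcomputing.com", "universql-server", "0.0.0.0"]:
    try:
        conn = snowflake.connector.connect(
            host=host,
            port=8084,
            user="XXX",
            password="XXX",
            account="XXX",
        )
        print("OK", host)
        conn.close()
    except Exception as e:
        print("FAIL", host, e)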

soumilshah1995 · Nov 19 '24

Thanks for the report @soumilshah1995! This limitation makes it harder to run Universql in a container. One solution I can think of is using cloudflared to create a public tunnel, which should make Universql accessible on a public network, including Docker's host machine. I have a prototype that I'm hoping to ship this week.
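
For context, the standalone cloudflared quick-tunnel pattern looks roughly like this; the local scheme and port that Universql would sit behind are assumptions here, since the built-in integration isn't shipped yet:

# Quick tunnel: cloudflared prints a public trycloudflare.com URL that
# forwards to the local service (scheme and port are assumptions here)
cloudflared tunnel --url http://localhost:8084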

buremba · Nov 21 '24

@buremba, I'm really excited about this! We'd love to have a Docker setup, as we plan to deploy on Kubernetes. The project is fantastic, with very well-written code. Looking forward to the prototype!

soumilshah1995 · Nov 21 '24

@soumilshah1995 Thanks a lot! I fixed the issues with connecting to the Docker container from your host machine. You should be able to connect to the localhostcomputing.com domain (which resolves to 127.0.0.1) from your host machine.

Additionally, I added an optional --tunnel cloudflared argument in case you would like to create a tunnel that makes your service publicly accessible.

If you want to deploy to Kubernetes, you can pass --host 0.0.0.0, which will make Universql run in HTTP mode so that you can terminate SSL at your load balancer. Are you using EKS or on-premise Kubernetes? I would love to hear more about your use case!
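
For reference, roughly how those two options would be passed, reusing the docker run pattern from the setup script above (everything apart from the --tunnel and --host flags is carried over, and the exact flag placement is an assumption):

# Variant 1: publicly reachable through a Cloudflare tunnel (flag placement assumed)
docker run -d --name universql_container -p 8084:8084 \
    buremba/universql snowflake --account "$ACCOUNT" --tunnel cloudflared

# Variant 2: plain HTTP behind a Kubernetes load balancer that terminates SSL
docker run -d --name universql_container -p 8084:8084 \
    buremba/universql snowflake --account "$ACCOUNT" --host 0.0.0.0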

buremba · Nov 26 '24

Is it in the main branch?

Can I use the setup below?

#!/bin/bash

# Define variables
IMAGE="buremba/universql"
CONTAINER_NAME="universql_container"
PORT="8084"
SOURCE_DIR=$(pwd)  # Use the current directory
TARGET_DIR="/usr/app"
CACHE_DIR="/usr/app/cache"

# Ensure ACCOUNT is set via environment variable
if [[ -z "$ACCOUNT" ]]; then
  echo "Error: ACCOUNT environment variable is not set."
  echo "Please export ACCOUNT before running this script."
  exit 1
fi

# Ensure AWS environment variables are set
if [[ -z "$AWS_ACCESS_KEY_ID" || -z "$AWS_SECRET_ACCESS_KEY" || -z "$AWS_DEFAULT_REGION" ]]; then
  echo "Error: AWS environment variables are not set."
  echo "Please export AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_DEFAULT_REGION before running this script."
  exit 1
fi

# Pull the Docker image
echo "Pulling Docker image: $IMAGE"
docker pull $IMAGE

# Run the container in detached mode
echo "Running container..."
docker run -d --name "$CONTAINER_NAME" \
    -p "$PORT":8084 \
    --mount type=bind,source="$SOURCE_DIR",target="$TARGET_DIR" \
    "$IMAGE" snowflake --account "$ACCOUNT" --cache-directory "$CACHE_DIR"

# Exec into the container and install AWS CLI
echo "Installing AWS CLI..."
docker exec -it $CONTAINER_NAME apt update
docker exec -it $CONTAINER_NAME apt install -y awscli

# Configure AWS CLI with environment variables
echo "Configuring AWS CLI..."
docker exec -it $CONTAINER_NAME aws configure set aws_access_key_id "$AWS_ACCESS_KEY_ID"
docker exec -it $CONTAINER_NAME aws configure set aws_secret_access_key "$AWS_SECRET_ACCESS_KEY"
docker exec -it $CONTAINER_NAME aws configure set default.region "$AWS_DEFAULT_REGION"

echo "Setup complete!"

soumilshah1995 · Nov 26 '24

It looks good to me. That will be enough to access Universql from Kubernetes, but you will need a public URL to connect from your local environment. If you have a public domain, you can pass SSL keys, or you can optionally pass --tunnel cloudflared to use a Cloudflare tunnel, which will create the public URL for you.

buremba · Nov 27 '24