Error when querying an external database in cube.py
Problem
I'm having a problem connecting to an external PostgreSQL database. It works fine in cube.js; the issue appeared while migrating to cube.py. Users' passwords are stored externally, so credentials need to be retrieved with queries.
Version
0.35.3
cube.py
from cube import config
from sqlalchemy import create_engine, MetaData, Table
@config('check_sql_auth')
def check_sql_auth(req: dict, user_name: str, password: str) -> dict:
    engine = create_engine('postgresql://user:password@host:port/database')
    print(2)
    metadata = MetaData()
    print(3)
    test_table = Table('users', metadata, autoload_with=engine)
    print(4)
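For context, a minimal sketch of how such a handler could be completed is shown below. It assumes the users table has user_name and password columns, that a PostgreSQL driver such as psycopg2 is installed, and that the return shape follows the check_sql_auth example in the Cube documentation; the connection string is a placeholder.
from cube import config
from sqlalchemy import create_engine, MetaData, Table, select

# Placeholder connection string; "+psycopg2" makes the driver explicit.
engine = create_engine('postgresql+psycopg2://user:password@host:port/database')

@config('check_sql_auth')
def check_sql_auth(req: dict, user_name: str, password: str) -> dict:
    # Reflect the (assumed) users table with user_name and password columns.
    metadata = MetaData()
    users = Table('users', metadata, autoload_with=engine)
    with engine.connect() as conn:
        row = conn.execute(
            select(users.c.password).where(users.c.user_name == user_name)
        ).first()
    if row is None:
        raise Exception('Incorrect user name or password')
    # Return shape assumed from the check_sql_auth example in the Cube docs.
    return {
        'password': row.password,
        'securityContext': {'user_name': user_name},
    }
Creating the engine once at module level lets each login reuse the connection pool instead of opening a new connection per request.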
docker-compose logs
ubuntu:~/hello-world$ docker-compose -f docker-compose.yml up
Starting hello-world_cubestore_router_1 ... done
Starting hello-world_cubestore_worker_1_1 ... done
Starting hello-world_cubestore_worker_2_1 ... done
Starting hello-world_cube_api_1 ... done
Attaching to hello-world_cubestore_router_1, hello-world_cubestore_worker_2_1, hello-world_cubestore_worker_1_1, hello-world_cube_api_1
cubestore_router_1 | 2024-04-09 20:54:32,067 INFO [cubestored] <pid:1> Cube Store version 0.35.2
cubestore_router_1 | 2024-04-09 20:54:32,074 INFO [cubestore::http::status] <pid:1> Serving status probes at 0.0.0.0:3031
cubestore_router_1 | 2024-04-09 20:54:32,075 INFO [cubestore::metastore::rocks_fs] <pid:1> Using existing metastore in /cube/.cubestore/data/metastore
cubestore_router_1 | 2024-04-09 20:54:32,108 INFO [cubestore::cluster] <pid:1> Meta store port open on 0.0.0.0:9999
cubestore_router_1 | 2024-04-09 20:54:32,108 INFO [cubestore::mysql] <pid:1> MySQL port open on 0.0.0.0:3306
cubestore_router_1 | 2024-04-09 20:54:32,108 INFO [cubestore::http] <pid:1> Http Server is listening on 0.0.0.0:3030
cubestore_worker_1_1 | 2024-04-09 20:54:32,553 INFO [cubestored] <pid:1> Cube Store version 0.35.2
cubestore_worker_2_1 | 2024-04-09 20:54:32,524 INFO [cubestored] <pid:1> Cube Store version 0.35.2
cubestore_worker_2_1 | 2024-04-09 20:54:32,533 INFO [cubestore::cluster] <pid:1> Worker port open on 0.0.0.0:9001
cubestore_worker_1_1 | 2024-04-09 20:54:32,567 INFO [cubestore::cluster] <pid:1> Worker port open on 0.0.0.0:9001
cube_api_1 | 🔗 Cube SQL (pg) is listening on 0.0.0.0:5432
cube_api_1 | 🚀 Cube API server (0.35.3) is listening on 4000
cube_api_1 | 2
cube_api_1 | 3
cube_api_1 | 🔗 Cube SQL (pg) is listening on 0.0.0.0:5432
cube_api_1 | 🚀 Cube API server (0.35.3) is listening on 4000
cubestore_router_1 | 2024-04-09 20:54:47,108 INFO [cubestore::metastore::rocks_fs] <pid:1> Using existing cachestore in /cube/.cubestore/data/cachestore
cubestore_router_1 | 2024-04-09 20:55:47,137 INFO [cubestore::metastore::rocks_store] <pid:1> Persisting cachestore snapshot: done (720.044µs)
requirements.txt
SQLAlchemy==2.0.29
test_connection.py
>>> import psycopg2
>>> conn = psycopg2.connect(user="user", password="password", host="0.0.0.0", port=5432, database="public")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ubuntu/.local/lib/python3.10/site-packages/psycopg2/__init__.py", line 122, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: connection to server at "0.0.0.0", port 5432 failed: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
@leopedrassoli Thanks for asking!
I'm sorry if I'm missing anything here, but where is this test_connection.py file coming from? And what is the issue, exactly?
My application stores users' passwords in an external database, so inside check_sql_auth I need to read the users table and return the user's password at login. However, when I try to read this table from the Postgres database, the API simply stops executing without printing any error.
The test_connection.py file is an example of connecting to Cube; it shows that logging in via the SQL API doesn't work.
requirements.txt
SQLAlchemy==2.0.29
@leopedrassoli I believe you need to add a PostgreSQL driver to the requirements, like psycopg2: https://docs.sqlalchemy.org/en/20/dialects/postgresql.html#dialect-postgresql
I believe the Python function silently crashes because of that, which would explain what you observe.
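For example, the requirements.txt could look like the following, where psycopg2-binary is one commonly used packaging of the driver and the pinned version here is only an assumption:
SQLAlchemy==2.0.29
psycopg2-binary==2.9.9
Wrapping the body of check_sql_auth in a try/except that prints the traceback can also help confirm whether an ImportError (missing driver) or a connection error is being swallowed.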
I updated requirements.txt and installed the library, but it still doesn't work; this is certainly not the problem.
@leopedrassoli Are you self-hosting Cube or using Cube Cloud?
Self-hosted.