databricks-sql-python
Databricks SQL Connector for Python
**Problem Summary:** When using an Azure AD service principal, a client ID and client secret are used to obtain a short-lived token. The token may then be used in the connection string: ``` uri...
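As a rough illustration of the flow the issue describes (not code from the issue itself), the OAuth2 client-credentials request for a service-principal token can be sketched as below. The tenant, client, and secret values are placeholders; the scope is the well-known Azure Databricks resource ID.

```python
# Sketch: build the Azure AD client-credentials token request for a
# service principal. No network call is made here; POSTing this form
# payload to token_url returns JSON containing "access_token".
TENANT_ID = "my-tenant-id"      # placeholder
CLIENT_ID = "my-client-id"      # placeholder
CLIENT_SECRET = "my-secret"     # placeholder

token_url = f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token"
payload = {
    "grant_type": "client_credentials",
    "client_id": CLIENT_ID,
    "client_secret": CLIENT_SECRET,
    # 2ff814a6-... is the fixed Azure AD resource ID for Azure Databricks
    "scope": "2ff814a6-3304-4ab8-85cb-cd0e6f879c1d/.default",
}
```

The `access_token` from the response is the short-lived token that then goes into the connection string.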
## Description I encountered this very consistent error with the `dbt-databricks` adapter after upgrading from `dbt-databricks==1.5.5` to `dbt-databricks==1.6.6`. For a `dbt run` with sufficiently many models, this error always...
SQLAlchemy version: 1.4.22 databricks-sql-connector version: 2.9.3 The following code throws the error below: ``` from sqlalchemy import create_engine [...] conn = create_engine( f"databricks://token:{token}@{host}?http_path={path}" "&catalog={catalog}&schema={schema}" ) test = pd.read_sql_table(table, con=conn) ```...
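One thing worth noting about the snippet above: the second string literal in the `create_engine` call lacks the `f` prefix, so `{catalog}` and `{schema}` are concatenated literally instead of being interpolated. A corrected URI construction (with placeholder values) would look like:

```python
# Placeholder values; substitute real workspace details.
token = "dapiXXXX"
host = "adb-1234567890123456.7.azuredatabricks.net"
path = "/sql/1.0/warehouses/abc123"
catalog = "main"
schema = "default"

# Both adjacent string literals need the f-prefix for interpolation.
uri = (
    f"databricks://token:{token}@{host}?http_path={path}"
    f"&catalog={catalog}&schema={schema}"
)
```

This may or may not be the cause of the reported error, but it would at least send the wrong catalog and schema to the server.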
I know that passing a sequence as a parameter works with the legacy inline parameters. However, as [mentioned there](https://github.com/databricks/databricks-sql-python/blob/main/docs/parameters.md#when-to-use-inline-parameters), it doesn't work with the native parameters. Is this somewhere on the roadmap...
Issue Description: Inserting data into Databricks via the databricks-sql-python library (leveraging SQLAlchemy) Error Message: ``` DatabaseError: (databricks.sql.exc.ServerOperationError) Column id is not specified in INSERT [SQL: INSERT INTO model_integrated (name)...
Hello, I'm trying to create an engine connection which could manage multiple catalogs at once ``` connection_uri = "databricks://token:XXXXXX@DB_HOST?http_path=/sql/1.0/warehouses/DWH_ID" engine = create_engine(connection_uri, future=True) meta_inspector = inspect(self.engine) ``` Later when calling...
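Since the `catalog` query parameter pins one default catalog per engine, a common workaround (sketched below with illustrative helper and values, not code from the issue) is to build one connection URI, and hence one engine, per catalog:

```python
# Sketch: derive a separate SQLAlchemy connection URI for each catalog.
# Helper name and all values are illustrative placeholders.
def catalog_uri(token: str, host: str, http_path: str, catalog: str) -> str:
    return (
        f"databricks://token:{token}@{host}"
        f"?http_path={http_path}&catalog={catalog}"
    )

uris = {
    c: catalog_uri("XXXXXX", "DB_HOST", "/sql/1.0/warehouses/DWH_ID", c)
    for c in ("catalog_a", "catalog_b")
}
# Each URI can then be passed to create_engine(...) and inspect(...)
# independently, one engine per catalog.
```

Fully qualified `catalog.schema.table` names in raw SQL are another route, but SQLAlchemy inspection generally works against the engine's default catalog.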
For cases where we might want a dedicated proxy for the Databricks SQL connector, but not override the global proxy in an application ~I have a branch that implements this,...
Hi, First of all thank you for the awesome work so far. This really improves the development experience for Databricks and finally allows tools such as alembic to work with...
I can't get auto-generated identity columns to work. The table is created just fine, but SQLAlchemy can't insert any rows. Error: > sqlalchemy.orm.exc.FlushError: Instance has a NULL identity key. If...
It appears that default values aren't supported? https://github.com/databricks/databricks-sql-python/blob/3f6834c9797503132cb0d1b9b770acc36cd22d42/src/databricks/sqlalchemy/base.py#L64 Trying to create a schema with the below `server_default` gives a spark error: ```python insert_timestamp: Mapped[datetime] = mapped_column( sa.DateTime(timezone=True), init=False, nullable=False, server_default=sa.func.current_timestamp(),...
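For both the identity-column and `server_default` issues above, the raw Databricks SQL DDL that the dialect would need to emit can be sketched as below. Table and column names are illustrative; note that column defaults on Delta tables require the `allowColumnDefaults` table feature, which is an assumption worth verifying against the workspace's Delta version.

```python
# Sketch of hand-written Databricks SQL DDL, bypassing the SQLAlchemy
# dialect: an identity column plus a server-side timestamp default.
ddl = """
CREATE TABLE my_table (
  id BIGINT GENERATED ALWAYS AS IDENTITY,
  name STRING,
  insert_timestamp TIMESTAMP DEFAULT current_timestamp()
)
TBLPROPERTIES ('delta.feature.allowColumnDefaults' = 'supported')
""".strip()
# Executing this via cursor.execute(ddl) on a databricks-sql-connector
# cursor creates the table; the dialect itself does not yet render these.
```

Running such DDL manually and mapping the existing table is a workaround until the dialect supports `Identity()` and `server_default` directly.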