
Databricks SQL Connector for Python

122 databricks-sql-python issues

The following example works with `databricks-sql-connector` version `2.9.3`, but fails with version `3.0.1`:

```python
import numpy as np
import pandas as pd
from sqlalchemy import create_engine

sqlalchemy_connection_string = f"databricks://token:{access_token}@{host}?http_path={http_path}&catalog={catalog}&schema={schema}"
engine...
```
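For context, a minimal, self-contained sketch of how such a SQLAlchemy engine is typically built and exercised with the `databricks://` URL format shown above. The token, host, and HTTP path values are placeholders, and the query is an arbitrary smoke test rather than the reporter's original workload.

```python
import pandas as pd
from sqlalchemy import create_engine, text

# Placeholder workspace credentials; substitute real values.
access_token = "<personal-access-token>"
host = "<workspace-host>"
http_path = "<warehouse-http-path>"
catalog = "main"
schema = "default"

sqlalchemy_connection_string = (
    f"databricks://token:{access_token}@{host}"
    f"?http_path={http_path}&catalog={catalog}&schema={schema}"
)
engine = create_engine(sqlalchemy_connection_string)

# Round-trip a trivial query through the engine to confirm connectivity.
with engine.connect() as conn:
    df = pd.read_sql(text("SELECT 1 AS one"), conn)
print(df)
```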

As the title says, I'm having an issue where the SQL connector, upon executing `.fetchall()`, returns 0 rows, while the cluster that ran the query returns approx. 151,653 rows -...

bug
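A sketch of the fetch pattern being described, with assumed connection parameters and an assumed query (the issue text does not include either):

```python
from databricks import sql

# Hypothetical connection values and query; the original issue omits both.
with sql.connect(
    server_hostname="<workspace-host>",
    http_path="<warehouse-http-path>",
    access_token="<personal-access-token>",
) as connection, connection.cursor() as cursor:
    cursor.execute("SELECT * FROM some_catalog.some_schema.some_table")
    rows = cursor.fetchall()
    # Reported symptom: len(rows) is 0 even though the cluster's query
    # history shows roughly 151,653 rows being produced.
    print(len(rows))
```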

You include `openpyxl` as a requirement for this package; however, `openpyxl` is not used by this library, as you can see from [this search](https://github.com/search?q=repo%3Adatabricks%2Fdatabricks-sql-python%20openpyxl&type=code). Please remove this requirement to reduce...

Arrow 15 has been released and it would be good to be able to use it. https://arrow.apache.org/blog/2024/01/21/15.0.0-release/

similar to: https://github.com/aws/amazon-redshift-python-driver/issues/220 While a `Cursor` attribute providing the SQLSTATE code is not officially part of [PEP 249: Python DB API 2.0 spec](https://peps.python.org/pep-0249/), it's a common enough convention and...

enhancement
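To illustrate the request, a hypothetical usage sketch: the `sqlstate` attribute read below does not exist in the connector today; it is the convention being asked for. Connection values are placeholders, and `ServerOperationError` is used only as an example of an error type the connector already raises.

```python
from databricks import sql
from databricks.sql.exc import ServerOperationError

# Hypothetical: the requested behaviour is for errors (or the cursor) to
# carry a five-character SQLSTATE, as several other DB API drivers do.
with sql.connect(
    server_hostname="<workspace-host>",
    http_path="<warehouse-http-path>",
    access_token="<personal-access-token>",
) as connection, connection.cursor() as cursor:
    try:
        cursor.execute("SELECT * FROM table_that_does_not_exist")
    except ServerOperationError as e:
        # `sqlstate` is the attribute being requested; it is not part of
        # the current API, so this prints None today.
        print(getattr(e, "sqlstate", None))
```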

Hello, when running the example from the doc:

```python
from databricks import sql

connection = sql.connect(
    server_hostname=HOSTNAME,
    http_path=HTTP_PATH,
    access_token=TOKEN)
cursor = connection.cursor()
cursor.execute('SELECT :param `p`, * FROM RANGE(10)', {"param": "foo"})...
```
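For reference, a self-contained version of that documented native-parameter call, with placeholder connection values; this shows the shape of the example being run, not a fix for the reported failure.

```python
from databricks import sql

# Placeholder connection values for a real workspace.
with sql.connect(
    server_hostname="<workspace-host>",
    http_path="<warehouse-http-path>",
    access_token="<personal-access-token>",
) as connection, connection.cursor() as cursor:
    # Native (named) parameter binding, as in the documented example.
    cursor.execute('SELECT :param `p`, * FROM RANGE(10)', {"param": "foo"})
    for row in cursor.fetchall():
        print(row)
```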

Hello, I've been using this package to automate some SQL pulldowns of a fairly large dataset, but I have realized after running it that the `fetchmany_arrow()` method is potentially overlapping its...

bug
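A sketch of the batched Arrow fetch loop being described, assuming placeholder connection values, an arbitrary batch size, an example table, and that `fetchmany_arrow()` returns an empty `pyarrow.Table` once the result set is exhausted:

```python
import pyarrow as pa
from databricks import sql

with sql.connect(
    server_hostname="<workspace-host>",
    http_path="<warehouse-http-path>",
    access_token="<personal-access-token>",
) as connection, connection.cursor() as cursor:
    cursor.execute("SELECT * FROM samples.nyctaxi.trips")  # example table

    batches = []
    while True:
        batch = cursor.fetchmany_arrow(10_000)  # pyarrow.Table per call
        if batch.num_rows == 0:
            break
        batches.append(batch)

    # Each call should return the *next* slice of the result set; the issue
    # reports that consecutive slices appear to overlap. If they do, this
    # total will exceed the row count shown in the warehouse query history.
    print(pa.concat_tables(batches).num_rows)
```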

The examples in the `README.md` and the documentation all print each row individually. In practice many users will try to convert the data to a pandas DataFrame. Why not include...

enhancement
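One possible shape for such a snippet, with placeholder connection values; `fetchall_arrow().to_pandas()` is used here to avoid building the frame row by row, on the assumption that Arrow results are enabled:

```python
import pandas as pd
from databricks import sql

with sql.connect(
    server_hostname="<workspace-host>",
    http_path="<warehouse-http-path>",
    access_token="<personal-access-token>",
) as connection, connection.cursor() as cursor:
    cursor.execute("SELECT * FROM RANGE(10)")
    df = cursor.fetchall_arrow().to_pandas()
    # Row-based alternative, built from plain tuples and cursor.description:
    # df = pd.DataFrame(cursor.fetchall(),
    #                   columns=[col[0] for col in cursor.description])

print(df.head())
```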

*Observation // Concern* `catalog` and `schema` initial values can be set when creating `Connection`. These values are _not_ applied in subsequent cursor metadata calls: `catalogs()`, `schemas()`, and `tables()`. These functions...

documentation
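A sketch of the behaviour being reported, plus the explicit-scoping workaround, assuming placeholder connection values and that the cursor metadata helpers accept `catalog_name` / `schema_name` keyword arguments:

```python
from databricks import sql

with sql.connect(
    server_hostname="<workspace-host>",
    http_path="<warehouse-http-path>",
    access_token="<personal-access-token>",
    catalog="main",      # initial catalog set on the Connection
    schema="default",    # initial schema set on the Connection
) as connection, connection.cursor() as cursor:
    # Reportedly NOT scoped by the Connection's catalog/schema defaults:
    cursor.tables()
    print(len(cursor.fetchall()))

    # Explicit scoping as a workaround (keyword names assumed):
    cursor.tables(catalog_name="main", schema_name="default")
    print(len(cursor.fetchall()))
```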

I'm trying to run the [SqlAlchemy example](https://github.com/databricks/databricks-sql-python/blob/main/examples/sqlalchemy.py) from this repo on a new instance of Databricks. I'm getting the following error: > databricks.sql.exc.ServerOperationError: [UC_COMMAND_NOT_SUPPORTED] Create sample tables/views is not supported...
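Presumably the failure surfaces during the example's table-creation step. The sketch below illustrates roughly what that step looks like with placeholder connection values; it is not a verbatim copy of `examples/sqlalchemy.py`.

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base

# Placeholder connection URL; substitute real workspace values.
engine = create_engine(
    "databricks://token:<personal-access-token>@<workspace-host>"
    "?http_path=<warehouse-http-path>&catalog=main&schema=default"
)

Base = declarative_base()

class SampleItem(Base):
    __tablename__ = "sample_items"
    id = Column(Integer, primary_key=True)
    name = Column(String)

# create_all() emits the CREATE TABLE statement; this is roughly the point
# at which a ServerOperationError like the one quoted would be raised.
Base.metadata.create_all(engine)
```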