
Databricks Unity Catalog and Cobrix

Open schwalldorf opened this issue 1 year ago • 11 comments

Hi guys,

thanks a lot for Cobrix. It's really great!

We're moving from Spark (Hadoop) on-premises to Databricks in the Azure cloud and have encountered a strange problem when using the Unity Catalog.

Both the copybook and the data are stored in a managed Volume in Unity Catalog. (The copybooks are simple, with no nested fields.) If we do something as simple as

df = spark.read.format("cobol"). \
        option("copybook", "/Volumes/dev/raw/copybook.cob"). \
        load("/Volumes/dev/raw/my.data")

in a Python notebook or script, everything works fine as long as the code runs on a compute cluster created by the same person who executes it. If the code is run by person A on a cluster created by person B, an "Insufficient Permissions" exception is raised. See:

[INSUFFICIENT_PERMISSIONS] Insufficient privileges:
User does not have permission SELECT on any file. SQLSTATE: 42501
File <command-4018475800944646>, line 1
----> 1 cobol_import("bnktfili")
File /databricks/spark/python/pyspark/sql/connect/client/core.py:1874, in SparkConnectClient._handle_rpc_error(self, rpc_error)
   1871             info = error_details_pb2.ErrorInfo()
   1872             d.Unpack(info)
-> 1874             raise convert_exception(
   1875                 info,
   1876                 status.message,
   1877                 self._fetch_enriched_error(info),
   1878                 self._display_server_stack_trace(),
   1879             ) from None
   1881     raise SparkConnectGrpcException(status.message) from None
   1882 else:

Person A has full read permissions on every item in the catalog. The problem only arises when using Cobrix; if we just load a CSV or Parquet file from a Volume, no such problem occurs.

Any idea what goes on here or what we could do? Any help is much appreciated. Thanks a lot.

schwalldorf avatar Apr 09 '24 07:04 schwalldorf

Some more error message context:

2024-04-05 12:58:20,105 1607 ERROR _handle_rpc_error GRPC Error received
Traceback (most recent call last):
  File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1485, in _execute_and_fetch_as_iterator
    for b in generator:
  File "/usr/lib/python3.10/_collections_abc.py", line 330, in __next__
    return self.send(None)
  File "/databricks/spark/python/pyspark/sql/connect/client/reattach.py", line 133, in send
    if not self._has_next():
  File "/databricks/spark/python/pyspark/sql/connect/client/reattach.py", line 194, in _has_next
    raise e
  File "/databricks/spark/python/pyspark/sql/connect/client/reattach.py", line 166, in _has_next
    self._current = self._call_iter(
  File "/databricks/spark/python/pyspark/sql/connect/client/reattach.py", line 280, in _call_iter
    raise e
  File "/databricks/spark/python/pyspark/sql/connect/client/reattach.py", line 263, in _call_iter
    return iter_fun()
  File "/databricks/spark/python/pyspark/sql/connect/client/reattach.py", line 167, in <lambda>
    lambda: next(self._iterator)  # type: ignore[arg-type]
  File "/databricks/python/lib/python3.10/site-packages/grpc/_channel.py", line 426, in __next__
    return self._next()
  File "/databricks/python/lib/python3.10/site-packages/grpc/_channel.py", line 826, in _next
    raise self
grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
	status = StatusCode.INTERNAL
	details = "[INSUFFICIENT_PERMISSIONS] Insufficient privileges:
User does not have permission SELECT on any file. SQLSTATE: 42501"
	debug_error_string = "UNKNOWN:Error received from peer unix:/databricks/sparkconnect/grpc.sock {grpc_message:"[INSUFFICIENT_PERMISSIONS] Insufficient privileges:\nUser does not have permission SELECT on any file. SQLSTATE: 42501", grpc_status:13, created_time:"2024-04-05T12:58:20.104583977+00:00"}"

schwalldorf avatar Apr 09 '24 09:04 schwalldorf

Do you read the copybook and the data file via the RDD API? If so, this is the likely cause, as the RDD API is not supported by Databricks in the Unity Catalog: https://learn.microsoft.com/en-us/azure/databricks/compute/access-mode-limitations#spark-api-limitations-for-unity-catalog-shared-access-mode

schwalldorf avatar Apr 09 '24 10:04 schwalldorf

@schwalldorf , Thanks for the interest in the project. Very glad you like it!

  • We use Hadoop client directly to load the copybook (https://github.com/AbsaOSS/cobrix/blob/7365fc1f9eafa646715b1a124958068dcefc4216/spark-cobol/src/main/scala/za/co/absa/cobrix/spark/cobol/utils/HDFSUtils.scala#L89-L89)
  • We use RDD for variable length files for
    • building indexes (https://github.com/AbsaOSS/cobrix/blob/7365fc1f9eafa646715b1a124958068dcefc4216/spark-cobol/src/main/scala/za/co/absa/cobrix/spark/cobol/source/CobolRelation.scala#L76-L76)
    • for reading data files (https://github.com/AbsaOSS/cobrix/blob/7365fc1f9eafa646715b1a124958068dcefc4216/spark-cobol/src/main/scala/za/co/absa/cobrix/spark/cobol/source/scanners/CobolScanners.scala#L58-L58)
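
The index-building step above is what forces low-level file access: in variable-length files each record carries its own length in a record descriptor word (RDW), so record boundaries can only be discovered by scanning the file sequentially. A minimal pure-Python sketch of that scan (hypothetical standalone code, not Cobrix's actual reader; it assumes IBM-style 4-byte RDWs whose first two bytes hold a big-endian length that includes the RDW itself):

```python
import struct

def build_record_index(data: bytes) -> list[tuple[int, int]]:
    """Scan a variable-length file once, returning (payload_offset, payload_length) pairs.

    Assumes each record starts with a 4-byte RDW whose first two bytes
    hold the record length (big-endian, including the 4 RDW bytes),
    as in IBM mainframe variable-format files. Illustrative only.
    """
    index = []
    pos = 0
    while pos + 4 <= len(data):
        (rec_len,) = struct.unpack(">H", data[pos:pos + 2])
        if rec_len < 4 or pos + rec_len > len(data):
            break  # malformed record length; stop scanning
        index.append((pos + 4, rec_len - 4))  # skip the RDW itself
        pos += rec_len
    return index

# Two records: a 6-byte payload "HELLO!" and a 3-byte payload "ABC"
blob = struct.pack(">HH", 10, 0) + b"HELLO!" + struct.pack(">HH", 7, 0) + b"ABC"
print(build_record_index(blob))  # → [(4, 6), (14, 3)]
```

Because each offset depends on the previous record's length, this scan cannot be parallelized naively, which is why Cobrix builds the index up front and then distributes the resulting chunks via the RDD API.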

What is the Databricks-supported alternative for reading data files concurrently from Spark?

yruslan avatar Apr 10 '24 13:04 yruslan

Hi Ruslan,

thanks a lot for your reply. Databricks supports both the DataFrame API and the Dataset API. I think the Dataset API should be the closest to RDDs, but I'm not an expert in this, and I wouldn't know how to easily rewrite your code.

schwalldorf avatar Apr 12 '24 09:04 schwalldorf

Sure, let's keep this issue open. This is something we might look into at some point. In the meantime, somebody might suggest a workaround.

yruslan avatar Apr 17 '24 10:04 yruslan

Hi there, I am also encountering the issue described in #665. I'm looking forward to any updates or workarounds that might become available and am following this issue for progress. Thanks!

meghanavemisetty avatar May 02 '24 15:05 meghanavemisetty

No progress on this so far, since I don't have access to a Databricks instance at the moment. That might change during the year; I will keep this in mind and fix it.

yruslan avatar May 07 '24 11:05 yruslan

Any luck with an update on this?

saikumare-a avatar Aug 02 '24 08:08 saikumare-a

Not from our side, since we are not yet using Databricks volumes on Unity Catalog.

Has this issue been raised with Databricks support as well? If so, please add a link to the support ticket.

A possible workaround is to use:

.option("enable_indexes", "true")

Let me know if it works

yruslan avatar Aug 02 '24 08:08 yruslan

Sure, I will check and report back. Thank you!

saikumare-a avatar Aug 02 '24 09:08 saikumare-a

@schwalldorf, @saikumare-a, @meghanavemisetty, if you have a stack trace that shows the lines of Cobrix Scala code where the error happens, it would help a bit. That would at least confirm which API is used for file access at that location.

Also, you can try:

  • Loading an ASCII file via Cobrix. This uses a different API, so if it works, that gives us some additional information.
  • Checking whether there is a difference between "record_format = F" and "record_format = V". Does access work for simple fixed-length files, or does it fail only for variable-length ones?
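
The two checks above can be sketched as option sets for the Cobrix reader. This is an illustrative diagnostic matrix, not tested against a Databricks cluster: the Volume paths are placeholders, and using "record_format = D" with ASCII encoding for the text-file check is an assumption based on Cobrix's documented record formats.

```python
# Hypothetical diagnostic matrix for the checks above.
# The copybook and data paths are placeholders, not real locations.
BASE_OPTIONS = {"copybook": "/Volumes/dev/raw/copybook.cob"}

EXPERIMENTS = {
    # Fixed-length records: sliced by record size, no index-building needed.
    "fixed": {**BASE_OPTIONS, "record_format": "F"},
    # Variable-length records: exercises the index-building code path.
    "variable": {**BASE_OPTIONS, "record_format": "V"},
    # ASCII text file: goes through a different (text-based) read path.
    "ascii": {**BASE_OPTIONS, "record_format": "D", "encoding": "ascii"},
}

def read_experiment(spark, name, path):
    """Run one diagnostic read; requires a live SparkSession with Cobrix on the classpath."""
    reader = spark.read.format("cobol")
    for key, value in EXPERIMENTS[name].items():
        reader = reader.option(key, value)
    return reader.load(path)
```

Running each experiment on both a personal and a shared cluster, and noting which combinations raise [INSUFFICIENT_PERMISSIONS], would narrow down which of the two RDD code paths triggers the Unity Catalog restriction.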

yruslan avatar Aug 02 '24 09:08 yruslan