
[SPARK-49723][SQL] Add Variant metrics to the JSON File Scan node


What changes were proposed in this pull request?

This pull request adds the following metrics to the JSON file scan node to track variants constructed as part of the scan:

variant top-level - total count
variant top-level - total byte size
variant top-level - total number of paths
variant top-level - total number of scalar values
variant top-level - max depth
variant nested - total count
variant nested - total byte size
variant nested - total number of paths
variant nested - total number of scalar values
variant nested - max depth

Top-level and nested variant metrics are reported separately because they can have different usage patterns. A variant counts as top-level when it comes from a singleVariantColumn scan, or from a user-provided-schema scan column whose type is variant itself (not a variant nested inside a struct/array/map); variants nested inside other data types count as nested variants.
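
For readers unfamiliar with how scan-node metrics are wired up, below is a minimal sketch of the usual Spark SQL pattern of registering named SQLMetric instances for an operator. The object name, metric keys, and the choice between createMetric and createSizeMetric are illustrative assumptions, not the exact code in this PR; only the metric descriptions come from the list above.

```scala
import org.apache.spark.SparkContext
import org.apache.spark.sql.execution.metric.{SQLMetric, SQLMetrics}

object VariantScanMetrics {
  // Hypothetical helper that builds the variant metric map a scan node could expose.
  // Note: a real "max depth" metric would need max-style aggregation rather than the
  // default sum; createMetric is used here purely for illustration.
  def create(sc: SparkContext): Map[String, SQLMetric] = Map(
    "variantTopLevelCount"      -> SQLMetrics.createMetric(sc, "variant top-level - total count"),
    "variantTopLevelByteSize"   -> SQLMetrics.createSizeMetric(sc, "variant top-level - total byte size"),
    "variantTopLevelNumPaths"   -> SQLMetrics.createMetric(sc, "variant top-level - total number of paths"),
    "variantTopLevelNumScalars" -> SQLMetrics.createMetric(sc, "variant top-level - total number of scalar values"),
    "variantTopLevelMaxDepth"   -> SQLMetrics.createMetric(sc, "variant top-level - max depth"),
    "variantNestedCount"        -> SQLMetrics.createMetric(sc, "variant nested - total count"),
    "variantNestedByteSize"     -> SQLMetrics.createSizeMetric(sc, "variant nested - total byte size"),
    "variantNestedNumPaths"     -> SQLMetrics.createMetric(sc, "variant nested - total number of paths"),
    "variantNestedNumScalars"   -> SQLMetrics.createMetric(sc, "variant nested - total number of scalar values"),
    "variantNestedMaxDepth"     -> SQLMetrics.createMetric(sc, "variant nested - max depth")
  )
}
```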

Why are the changes needed?

This change allows users to collect metrics on variant usage to better monitor their data/workloads.

Does this PR introduce any user-facing change?

Yes. Users can now see variant metrics on JSON file scan nodes; these metrics were not available before.
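
For example, setting AQE plan-wrapping details aside, a user could inspect the new metrics programmatically after a scan. The path, column name, and filtering logic below are hypothetical; the metric labels are those listed above.

```scala
// Read JSON with each row parsed into a single top-level variant column "v".
val df = spark.read
  .format("json")
  .option("singleVariantColumn", "v")
  .load("/tmp/events.json")

df.collect()  // run the scan so the metrics get populated

// Collect the leaf (file scan) node's SQL metrics from the executed plan.
val scanMetrics = df.queryExecution.executedPlan
  .collectLeaves()
  .flatMap(_.metrics)
  .toMap

// Print only the variant-related metrics.
scanMetrics
  .filter { case (_, m) => m.name.exists(_.startsWith("variant")) }
  .foreach { case (key, m) => println(s"$key: ${m.value}") }
```

The same values are visible on the Scan JSON node in the Spark UI's SQL tab.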

How was this patch tested?

Comprehensive unit tests in VariantEndToEndSuite.scala
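
Roughly, a test along these lines could assert the metric values after a scan; the suite name, test data, and assertions below are an illustrative sketch, not the actual test code in VariantEndToEndSuite.scala.

```scala
import org.apache.spark.sql.QueryTest
import org.apache.spark.sql.test.SharedSparkSession

class VariantScanMetricsSuiteSketch extends QueryTest with SharedSparkSession {
  test("JSON scan reports top-level variant metrics") {
    withTempDir { dir =>
      val path = dir.getCanonicalPath
      // Write three JSON lines to scan back as variants.
      spark.range(3).selectExpr("""'{"a": 1, "b": [2, 3]}' as json""")
        .write.mode("overwrite").text(path)

      val df = spark.read.format("json")
        .option("singleVariantColumn", "v")
        .load(path)
      df.collect()

      // Find the top-level count metric on the scan node and check it.
      val scanMetrics = df.queryExecution.executedPlan.collectLeaves()
        .flatMap(_.metrics.values)
      val topLevelCount =
        scanMetrics.find(_.name.exists(_.contains("variant top-level - total count")))
      assert(topLevelCount.exists(_.value == 3L))
    }
  }
}
```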

Was this patch authored or co-authored using generative AI tooling?

Yes, some assistance was used for Scala syntax. Generated-by: ChatGPT 4o, GitHub Copilot.

harshmotw-db · Sep 19 '24 21:09