[SPARK-40488] Do not wrap exceptions thrown when datasource write fails
What changes were proposed in this pull request?
Exceptions thrown when a datasource write fails (e.g. in FileFormatWriter.write and WriteToDataSourceV2.writeWithV2) are currently wrapped in SparkException("Job aborted.") or SparkException("Writing job aborted").
The wrapping adds little information, but it lengthens the stacktrace and makes debugging more difficult.
This change removes the wrapping, along with the now-unused error class WRITING_JOB_ABORTED.
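For illustration, here is a minimal sketch of the pattern being removed (names are simplified and hypothetical; the real code paths are FileFormatWriter.write and the DSv2 write exec nodes):

```scala
// Illustrative sketch only, not the actual Spark code.
import org.apache.spark.SparkException

object WriteJobSketch {
  def runJob(): Unit = { /* commit tasks, write files, etc. */ }

  // Before: any failure was re-thrown wrapped in a generic SparkException,
  // adding an extra layer to the stacktrace without new information.
  def writeBefore(): Unit = {
    try {
      runJob()
    } catch {
      case cause: Throwable =>
        throw new SparkException("Job aborted.", cause)
    }
  }

  // After: the original exception propagates directly, keeping the stacktrace short.
  def writeAfter(): Unit = {
    runJob()
  }
}
```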
Why are the changes needed?
This simplifies the stacktrace produced when a datasource write fails.
Does this PR introduce any user-facing change?
Yes. Exceptions thrown during datasource writes will no longer be wrapped in SparkException, so the resulting stacktraces will be simpler.
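As a hypothetical example of how caller code might be affected (the exact exception type now depends on what the underlying datasource throws), consider:

```scala
// Hypothetical caller code, not from the PR: it sketches how user error handling
// changes once the SparkException wrapper is removed.
import org.apache.spark.SparkException

object WriteErrorHandling {
  // Before: the root cause had to be unwrapped from the generic SparkException.
  def handleBefore(write: => Unit): Unit = {
    try write catch {
      case e: SparkException => println(s"Root cause: ${e.getCause}")
    }
  }

  // After: the original exception from the failing write arrives directly.
  def handleAfter(write: => Unit): Unit = {
    try write catch {
      case e: Throwable => println(s"Write failed with: $e")
    }
  }
}
```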
How was this patch tested?
Existing tests.
@cloud-fan, could you take a look?
thanks, merging to master!