Facing `TypeError: dataclass_transform() got an unexpected keyword argument 'field_specifiers'`. Code used: `from pyspark_ai import SparkAI; spark_ai = SparkAI(verbose=True); spark_ai.activate()`. Also, please clarify whether it is pyspark-ai or pyspark_ai that should be imported....
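For reference, a minimal sketch of the usage in question, assuming the package is installed under its hyphenated PyPI name while the module is imported with an underscore:

```python
# Install with the hyphenated PyPI name:  pip install pyspark-ai
# Import with the underscored module name (hyphens are not valid in Python imports).
from pyspark_ai import SparkAI

spark_ai = SparkAI(verbose=True)  # same call as in the failing snippet above
spark_ai.activate()               # activates the English-SDK helpers on DataFrames
```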
### Willingness to contribute
Yes. I can contribute a documentation fix independently.

### URL(s) with the issue
https://mlflow.org/docs/latest/python_api/mlflow.sentence_transformers.html

### Description of proposal (what needs changing)
Just like log_model for mlflow.transformers...
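As context for the proposal, a hedged sketch of logging a sentence-transformers model with this flavor (the model name is only an illustrative choice, not taken from the issue):

```python
# Sketch only: the model name is an illustrative choice.
import mlflow
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

with mlflow.start_run():
    mlflow.sentence_transformers.log_model(model, artifact_path="model")
```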
When executing the command below to write to the path below, I am facing the issue:

val newPath = "jdbc:sqlite://tmp/my-sqlite.db"
val tablename = "flight_info"
val props = new java.util.Properties
props.setProperty("driver", "org.sqlite.JDBC")
csvFile.write.mode("overwrite").jdbc(newPath,...
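For reference, a rough PySpark sketch of the same write (not a verified fix); it assumes an active SparkSession, an already-loaded DataFrame in place of `csvFile`, and the sqlite JDBC driver on the classpath:

```python
# Rough PySpark equivalent of the Scala snippet above (a sketch, not a verified fix).
# Assumes an active SparkSession, a DataFrame csv_file loaded elsewhere, and the
# sqlite JDBC driver available on the classpath.
new_path = "jdbc:sqlite://tmp/my-sqlite.db"   # URL copied as-is from the report
table_name = "flight_info"
props = {"driver": "org.sqlite.JDBC"}

csv_file.write.mode("overwrite").jdbc(new_path, table_name, properties=props)
```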
The piece of code below, when run on a Spark 3 cluster:

# in Python
colName = "count"
upperBound = 348113L
numPartitions = 10
lowerBound = 0L

fails with File...
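One part of this can be stated with certainty: the trailing `L` on `348113L` and `0L` is Python 2 long-literal syntax and is a SyntaxError under Python 3. A sketch with plain integers follows; the sqlite URL, table name, and properties are assumptions for illustration only:

```python
# The trailing "L" long-literal suffix is Python 2 syntax; Python 3 rejects it,
# so plain integers are used here.
colName = "count"
upperBound = 348113
numPartitions = 10
lowerBound = 0

# Illustrative partitioned JDBC read; the url, table name, and props are assumptions,
# and an active SparkSession named spark is assumed.
url = "jdbc:sqlite://tmp/my-sqlite.db"
props = {"driver": "org.sqlite.JDBC"}
df = spark.read.jdbc(
    url,
    "flight_info",
    column=colName,
    lowerBound=lowerBound,
    upperBound=upperBound,
    numPartitions=numPartitions,
    properties=props,
)
```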
Query in the book:

INSERT INTO partitioned_flights PARTITION (DEST_COUNTRY_NAME="UNITED STATES")
SELECT count, ORIGIN_COUNTRY_NAME FROM flights
WHERE DEST_COUNTRY_NAME='UNITED STATES' LIMIT 12

In Spark 3.0, the above query returns the error below...
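For completeness, the same statement issued from PySpark; this is only a reproduction of the book's query (it assumes the `flights` view and the `partitioned_flights` table from earlier in the chapter already exist) and is not a proposed fix:

```python
# Reproduction only; assumes the flights view and partitioned_flights table
# from earlier in the chapter already exist. Not a proposed fix.
spark.sql("""
  INSERT INTO partitioned_flights PARTITION (DEST_COUNTRY_NAME="UNITED STATES")
  SELECT count, ORIGIN_COUNTRY_NAME FROM flights
  WHERE DEST_COUNTRY_NAME='UNITED STATES' LIMIT 12
""")
```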
Please find below the correction in the book under the Grouping section.

In the book:

-- in SQL
%sql
SELECT count(*) FROM dfTable GROUP BY InvoiceNo, CustomerId

+---------+----------+-----+
|InvoiceNo|CustomerId|count|
+---------+----------+-----+
|...
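As a cross-check, a sketch of the equivalent DataFrame grouping (assuming `df` is the retail-data DataFrame used in that chapter); its output does have the three columns shown above:

```python
# Sketch assuming df is the retail-data DataFrame used in that chapter.
# The result has the InvoiceNo, CustomerId, and count columns shown above.
df.groupBy("InvoiceNo", "CustomerId").count().show()
```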
Please find below the correction in the book under the **sumDistinct** section.

**In the book:**

-- in SQL
select sum(Quantity) from dfTable -- 29310

This query will actually result in 5176450, not 29310.

**Correct one:**...
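A small DataFrame-level sketch of the distinction (assuming the same retail `df`); the plain sum reproduces the 5176450 figure reported above, while the distinct variant matches the 29310 quoted in the book:

```python
# Sketch assuming df is the retail-data DataFrame from the chapter.
# sum() reproduces the 5176450 total reported above; sum_distinct()
# (sumDistinct in older PySpark releases) matches the book's 29310.
from pyspark.sql.functions import sum as sum_, sum_distinct

df.select(sum_("Quantity"), sum_distinct("Quantity")).show()
```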
Under the Working with JSON section in the book, it is mentioned that "The equivalent in SQL would be jsonDF.selectExpr("json_tuple(jsonString, '$.myJSONKey.myJSONValue[1]') as column").show(2)", but this is not SQL syntax at...
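For reference, a hedged sketch contrasting the DataFrame expression with an actual SQL form, using the sample JSON string from that section of the book (an active SparkSession named `spark` is assumed):

```python
# Sketch using the sample JSON string from that section of the book;
# assumes an active SparkSession named spark.
from pyspark.sql.functions import col, get_json_object, json_tuple

jsonDF = spark.range(1).selectExpr(
    """'{"myJSONKey" : {"myJSONValue" : [1, 2, 3]}}' as jsonString"""
)

# DataFrame expressions (what the book's snippet actually is):
jsonDF.select(
    get_json_object(col("jsonString"), "$.myJSONKey.myJSONValue[1]").alias("column"),
    json_tuple(col("jsonString"), "myJSONKey"),
).show(2)

# Actual SQL goes through a temp view instead:
jsonDF.createOrReplaceTempView("jsonTable")
spark.sql("SELECT json_tuple(jsonString, 'myJSONKey') FROM jsonTable").show(2)
```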