
Integration of TensorFlow with other open-source frameworks

75 ecosystem issues, sorted by most recently updated

```
RayKo-MBP:spark RAY$ ./tests/integration/run.sh
Stopping spark_worker_2 ... done
Stopping spark_worker_1 ... done
Stopping spark_master_1 ... done
Removing spark_worker_2 ... done
Removing spark_worker_1 ... done
Removing spark_master_1 ... done
Removing network spark_default...
```

Change spark-tensorflow-connector to build against spark-3.0.0-preview2

Test:
~~~bash
cd $PROJ_HOME/hadoop
mvn clean install # build tensorflow-hadoop:1.10.0 and install into local repo
cd $PROJ_HOME/spark/spark-tensorflow-connector
mvn clean install
~~~


https://github.com/tensorflow/ecosystem/blob/12d65f29b29a1b5bc975d9c11745b6e67818a6ae/spark/spark-tensorflow-connector/src/main/scala/org/tensorflow/spark/datasources/tfrecords/serde/DefaultTfRecordRowEncoder.scala#L96 This line of code indicates that ArrayType(StringType, _) will be encoded to a FeatureList; however, the method `encodeFeatureList` does not handle this case and will throw an exception. https://github.com/tensorflow/ecosystem/blob/12d65f29b29a1b5bc975d9c11745b6e67818a6ae/spark/spark-tensorflow-connector/src/main/scala/org/tensorflow/spark/datasources/tfrecords/serde/DefaultTfRecordRowEncoder.scala#L194 Is this...
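For reference, a minimal PySpark sketch (not from the report) of the scenario being described: a DataFrame with an `ArrayType(StringType)` column written out as SequenceExample records. The column names, path, and the `recordType` option value are assumptions; whether this exact schema triggers the exception is what the report is asking about.

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, LongType, ArrayType, StringType

spark = SparkSession.builder.appName("stringlist-featurelist").getOrCreate()

schema = StructType([
    StructField("id", LongType()),
    StructField("tokens", ArrayType(StringType())),  # the ArrayType(StringType) column in question
])
df = spark.createDataFrame([(1, ["a", "b"]), (2, ["c"])], schema)

# SequenceExample output is the mode where encodeFeatureList comes into play.
(df.write
   .format("tfrecords")
   .option("recordType", "SequenceExample")
   .save("/tmp/stringlist-tfrecords"))
```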

I want to split my data evenly, so I added a column `index` to my dataframe, and I am pretty sure this column is added correctly. I printed some rows:...
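For comparison, a minimal sketch (mine, not the reporter's) of one way to add a gap-free `index` column and split a dataframe evenly by modulo; the column name and split count are made up:

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()
df = spark.range(100).withColumnRenamed("id", "value")  # stand-in for the real dataframe

# row_number over a global ordering gives a dense 0..N-1 index,
# so splitting by modulo yields buckets of (almost) equal size.
n_splits = 4
indexed = df.withColumn("index", F.row_number().over(Window.orderBy("value")) - 1)
splits = [indexed.filter(F.col("index") % n_splits == i) for i in range(n_splits)]

for i, part in enumerate(splits):
    print(i, part.count())
```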

`spark-tensorflow-connector` does not seem to work with Spark 2.0.X (the prerequisite in the README is [Spark 2.0 or later](https://github.com/tensorflow/ecosystem/tree/master/spark/spark-tensorflow-connector#prerequisites)). I got the following error when compiling with Spark 2.0.2:
```
$ mvn install...
```

`Caused by: java.lang.NoClassDefFoundError: org/apache/spark/sql/catalyst/util/ArrayData$`

When I do `df.write.format("tfrecords").option('writeLocality','local').save(path)` or `df.write.format("tfrecords").save(path)`, the file is not getting created on the main Driver node. I don't see the folder either. I am unable to _overwrite_ as...
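If I recall the connector's README correctly, `writeLocality=local` writes each partition to the local filesystem of the executor that holds it, which would explain why nothing shows up on the driver; a minimal sketch of that call, with a placeholder dataframe and a made-up path:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.range(10).withColumnRenamed("id", "value")  # placeholder dataframe

# With writeLocality=local each executor writes its partitions to its own
# local filesystem, so the output lands on the worker nodes, not the driver.
(df.write
   .format("tfrecords")
   .option("recordType", "Example")
   .option("writeLocality", "local")
   .save("/tmp/local-tfrecords"))
```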

Hi, I have an existing HDFS file in Parquet format. When I write this dataframe in "tfrecords" format and later read this new file (in "tfrecords" format), ...
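A round-trip sketch of the workflow being described, with made-up HDFS paths and assuming the connector's `tfrecords` read path; comparing the two schemas is usually where differences appear, since Example features only carry int64, float, and bytes lists:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

parquet_df = spark.read.parquet("hdfs:///data/events.parquet")  # hypothetical path
parquet_df.write.format("tfrecords").option("recordType", "Example") \
    .save("hdfs:///data/events_tfrecords")

tfrecord_df = spark.read.format("tfrecords").option("recordType", "Example") \
    .load("hdfs:///data/events_tfrecords")

# Column types may not round-trip exactly: Example features are limited to
# int64, float, and bytes lists, so timestamps, decimals, etc. come back differently.
parquet_df.printSchema()
tfrecord_df.printSchema()
```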

Most dataframe writer formats have write 'modes' where the user can select from `append`, `overwrite`, `ignore` and `error`. Currently, spark-tensorflow-connector silently ignores this parameter. **Here is a toy example, which...
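A toy example along those lines (hypothetical path): write once, then ask for `overwrite`. With most sources the second call replaces the data; the report is that the connector ignores the requested mode:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.range(10)

path = "/tmp/mode-test-tfrecords"
df.write.format("tfrecords").save(path)
# The second write asks for overwrite; the report is that the mode is ignored.
df.write.format("tfrecords").mode("overwrite").save(path)
```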

How can I read the TFRecord files using Python? It always goes wrong when I read with FixedLenSequenceFeature.
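A minimal TensorFlow-side sketch of reading such files with a sequence feature spec; the feature names, dtypes, and path are assumptions, not taken from the question:

```python
import tensorflow as tf

context_spec = {"id": tf.io.FixedLenFeature([], tf.int64)}
sequence_spec = {"tokens": tf.io.FixedLenSequenceFeature([], tf.string)}

def parse(serialized):
    # SequenceExamples need the sequence parser; plain Examples use
    # tf.io.parse_single_example with FixedLenFeature/VarLenFeature instead.
    context, sequences = tf.io.parse_single_sequence_example(
        serialized,
        context_features=context_spec,
        sequence_features=sequence_spec)
    return context["id"], sequences["tokens"]

dataset = tf.data.TFRecordDataset(
    tf.io.gfile.glob("/tmp/stringlist-tfrecords/part-*")).map(parse)

for example_id, tokens in dataset.take(1):
    print(example_id.numpy(), tokens.numpy())
```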