spark-corenlp
                        Stanford CoreNLP wrapper for Apache Spark
Where are the releases published to? Are they on bintray, and if so what is the plan for the sunset on May 1st? See: https://github.com/graphframes/graphframes/issues/384 https://jfrog.com/blog/into-the-sunset-bintray-jcenter-gocenter-and-chartcenter/ @mengxr @slothspot @geoHeil @auroredea
I can see that there is a function defined for dependency parsing, `depparse`. However, I can't see **Constituency Parsing** (`parse`) in the list of functions. Is there any way...
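In the meantime, one possible workaround is to wrap CoreNLP's Simple API directly in a UDF, the same way the library's other functions do. This is a sketch, not part of spark-corenlp itself; the `parseTree` name is mine, and it assumes `stanford-corenlp` and its models jar are on the classpath:

```scala
import org.apache.spark.sql.functions.udf
import edu.stanford.nlp.simple.Sentence

// Hypothetical constituency-parse UDF: the Simple API's Sentence.parse()
// returns the constituency tree, which we render as a bracketed string.
val parseTree = udf { text: String =>
  new Sentence(text).parse().toString
}
```

Usage would then mirror the built-in functions, e.g. `input.withColumn("tree", parseTree($"text"))`.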
Hi, Stanford CoreNLP has been upgraded to 4.0.0, and there are some classpath changes to its English models, so using 4.0.0 with spark-corenlp 0.4.0-spark2.4-scala2.11 will throw exceptions about...
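Until the wrapper is rebuilt against 4.x, a possible workaround is to pin the CoreNLP version the 0.4.0 release was presumably built against. A hedged `build.sbt` sketch (the 3.9.2 version is an assumption based on the release timeline, not something confirmed by the maintainers):

```scala
// Pin CoreNLP 3.9.2 instead of 4.0.0 so the English model paths
// match what spark-corenlp 0.4.0-spark2.4-scala2.11 expects.
libraryDependencies ++= Seq(
  "edu.stanford.nlp" % "stanford-corenlp" % "3.9.2",
  "edu.stanford.nlp" % "stanford-corenlp" % "3.9.2" classifier "models"
)
```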
```
import org.apache.spark.sql.functions._
import com.databricks.spark.corenlp.functions._

val input = Seq(
  (1, "Stanford University is located in California. It is a great university"),
  (2, "")
).toDF("id", "text")

input.withColumn("sentiment", sentiment($"text")).show()
```
**Error: org.apache.spark.SparkException:...
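If the failure comes from the empty document in row 2 (an assumption; CoreNLP annotators generally cannot process empty text), one sketch of a workaround is to drop empty rows before annotating. This assumes a spark-shell style session where `spark` and the library are available:

```scala
import org.apache.spark.sql.functions.{length, trim}
import com.databricks.spark.corenlp.functions.sentiment
import spark.implicits._ // assumes an existing SparkSession named `spark`

val input = Seq(
  (1, "Stanford University is located in California. It is a great university"),
  (2, "")
).toDF("id", "text")

// Filter out empty/whitespace-only documents before applying the annotator.
// A conditional `when(...)` is not a safe alternative here, since Spark
// gives no guarantee that the UDF is skipped on the other branch.
val out = input
  .filter(length(trim($"text")) > 0)
  .withColumn("sentiment", sentiment($"text"))
```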
With [Stanford NLP up and running in Python](https://stanfordnlp.github.io/stanfordnlp/), is there any intention of developing a Python/PySpark wrapper as well? And in the meantime, what would be the best way, say...
Hi, for those who are interested, I am maintaining a [French version of this library](https://framagit.org/interchu/spark-corenlp-french).
Hi there, a Scala newbie question. I'm trying to use this package, but how should it be added to `build.sbt`? `"com.databricks" % "spark-corpnlp" % "0.3.0-SNAPSHOT"` does not work ("not...
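One possible `build.sbt` setup, assuming the releases are published to the spark-packages repository (and noting the artifact name is `spark-corenlp`, not `spark-corpnlp`):

```scala
// Add the spark-packages resolver, then depend on a published release
// rather than a SNAPSHOT version.
resolvers += "Spark Packages Repo" at "https://repos.spark-packages.org/"

libraryDependencies += "databricks" % "spark-corenlp" % "0.4.0-spark2.4-scala2.11"
```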
Could we have a Spark wrapper for this function: https://github.com/stanfordnlp/CoreNLP/blob/5fdbfb209069276e95e1765093df9855d2cf2c38/src/edu/stanford/nlp/tagger/maxent/TTags.java#L288 to get the set of all possible POS tags, not just the tags of a particular sentence?
In order to support other languages like Chinese, we should load a different properties file based on the language. In this commit, I use a Java property to support loading the CoreNLP properties file...
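For context, the usual pattern for switching languages in CoreNLP itself is to load the properties file shipped in the language-specific models jar. A minimal sketch, assuming `stanford-corenlp-models-chinese` is on the classpath:

```scala
import java.util.Properties
import edu.stanford.nlp.pipeline.StanfordCoreNLP

// The Chinese models jar ships a ready-made properties file that
// configures the segmenter, POS tagger, and parser for Chinese.
val props = new Properties()
props.load(
  getClass.getResourceAsStream("/StanfordCoreNLP-chinese.properties"))
val pipeline = new StanfordCoreNLP(props)
```

A language-keyed lookup over such resource paths is one way a wrapper could expose this per-language configuration.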