Support feeding from Spark
Today, Vespa's Hadoop integration supports feeding and querying from Pig. The Pig feeder is a thin wrapper around the Vespa HTTP client.
We should support feeding directly from Spark as well, so that Spark pipelines don't have to write to HDFS and run a separate Pig job just to do the feeding. As with the Pig feeder, this could be implemented as a thin wrapper around the HTTP client.
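A sink along these lines could be sketched roughly as below. This is only a sketch, assuming the vespa-feed-client Java API (`FeedClientBuilder`, `DocumentId`, `OperationParameters`) and a hypothetical `Dataset` carrying an `id` column and a `json` column with the document fields; the endpoint, namespace, and document type names are placeholders:

```java
import ai.vespa.feed.client.DocumentId;
import ai.vespa.feed.client.FeedClient;
import ai.vespa.feed.client.FeedClientBuilder;
import ai.vespa.feed.client.OperationParameters;
import ai.vespa.feed.client.Result;
import org.apache.spark.api.java.function.ForeachPartitionFunction;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

import java.net.URI;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class VespaSink {
    // Feed each partition through its own FeedClient instance; the client
    // handles connection reuse and async dispatch internally.
    static void feed(Dataset<Row> docs, URI endpoint) {
        docs.foreachPartition((ForeachPartitionFunction<Row>) rows -> {
            try (FeedClient client = FeedClientBuilder.create(endpoint).build()) {
                List<CompletableFuture<Result>> pending = new ArrayList<>();
                while (rows.hasNext()) {
                    Row row = rows.next();
                    // "mynamespace"/"mydoctype" are placeholders for the target schema.
                    DocumentId id = DocumentId.of("mynamespace", "mydoctype",
                                                  row.getAs("id"));
                    String json = row.getAs("json"); // document fields as JSON
                    pending.add(client.put(id, json, OperationParameters.empty()));
                }
                // Block until every operation in this partition has completed.
                CompletableFuture.allOf(pending.toArray(new CompletableFuture[0])).join();
            }
        });
    }
}
```

Creating the client inside `foreachPartition` keeps it out of the Spark closure, so nothing non-serializable is shipped to the executors.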
@kkraune I don't see the Hadoop integration anymore. Do we still want Spark support? I would be interested in taking it up.
Hi, yes that would be a great addition! A good starting point is https://docs.vespa.ai/en/vespa-feed-client.html. Thanks!
Great. I'll spend some time investigating how we can design a sink in Spark.
Can I take this issue?
Sure, thanks for contributing! https://github.com/vespa-engine/vespa/blob/master/CONTRIBUTING.md is a good place to start