Mirko Kämpf
For testing the Spark-Kafka integration I used this test class:

```
import org.apache.wayang.api.DataQuantaBuilder;
import org.apache.wayang.api.FilterDataQuantaBuilder;
import org.apache.wayang.api.JavaPlanBuilder;
import org.apache.wayang.api.ReduceByDataQuantaBuilder;
import org.apache.wayang.basic.data.Tuple2;
import org.apache.wayang.core.api.Configuration;
import org.apache.wayang.core.api.WayangContext;
import org.apache.wayang.core.function.FunctionDescriptor;
import org.apache.wayang.core.optimizer.cardinality.DefaultCardinalityEstimator;
import...
```
What is expected in this task? Can someone please outline what kind of documentation is expected? Is it just some inline documentation of the code, or rather a tutorial...
The same problem occurs in example 3 when running the KMeans algorithm.
My plan is to work on this during the next week.

Best wishes,
Mirko

On Sat, Aug 19, 2017 at 6:21 AM, Luciano Resende wrote:
> @kamir Could you please...
I suggest starting with the implementation of Kafka-Source and Kafka-Sink components, so that existing Apache Wayang applications can read input directly from Kafka topics and store results directly in...
I am back on this task, working out a simple KafkaSource component that reads plain-text messages from a Kafka cluster, comparable to the JavaFileSource, which can read a file line by...
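To illustrate the idea, here is a minimal, hypothetical sketch of the shape such a KafkaSource could take: just as JavaFileSource emits a file line by line, the KafkaSource would emit the value of each consumed record as one "line". The class name `KafkaSourceSketch` and the `Supplier`-based stub are my own assumptions; the actual Kafka consumer poll loop is hidden behind the supplier so the shape can be shown without a running cluster, and is not Wayang's real operator API.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.Supplier;

// Hypothetical sketch (not the actual Wayang API): a source that, like
// JavaFileSource yielding a file line by line, yields each consumed Kafka
// record value as one "line". The real KafkaConsumer poll loop is stubbed
// behind a Supplier so the example is self-contained.
public class KafkaSourceSketch {

    // Stand-in for something like consumer.poll(...) returning record values.
    private final Supplier<List<String>> pollBatch;

    public KafkaSourceSketch(Supplier<List<String>> pollBatch) {
        this.pollBatch = pollBatch;
    }

    // Drain one batch of messages, mirroring line-by-line file reading.
    public List<String> readLines() {
        return new ArrayList<>(pollBatch.get());
    }

    public static void main(String[] args) {
        // Simulated topic contents instead of a real broker.
        KafkaSourceSketch source =
                new KafkaSourceSketch(() -> Arrays.asList("msg-1", "msg-2"));
        for (String line : source.readLines()) {
            System.out.println(line);
        }
    }
}
```

In the real component, the supplier would be replaced by a `KafkaConsumer` subscribed to the input topic, and the emitted strings would feed into the Wayang plan the same way lines from JavaFileSource do.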
Started merging the feature via branch mk-feature-2
In my forked repository, I mistakenly used the release branch for developing this feature. For now, I am closing that pull request and will open a new one with a cleaner branch setup.
What is meant by "Handle different variants?"
Ok, I see. I will work on that next. As of now, I have just added the logging capability. But this was a good warm-up exercise.