Nelson

15 comments by Nelson

I have the same problem. Did you find a solution, @skills-up?

+1, any updates on this? EDIT: For me, RabbitMQ was running on an old server. Migrating it to a more recent server with more CPU fixed the problem (3k...

We had this problem when RabbitMQ was overloaded (10,000 msg/s in the same queue). One of the solutions was to create 10 queues (distributed across different servers) with 2 consumers...
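To make that sharding idea concrete, here is a minimal consumer-side sketch, assuming the RabbitMQ Java client 5.x; the host name, the `work.shard.N` queue naming scheme, and the prefetch value are placeholders of my own, not from the original setup:

```scala
import com.rabbitmq.client.{CancelCallback, ConnectionFactory, DeliverCallback}

object ShardedConsumers {
  def main(args: Array[String]): Unit = {
    val factory = new ConnectionFactory()
    factory.setHost("rabbitmq.example.com") // placeholder host
    val connection = factory.newConnection()

    // One queue per shard; each shard gets its own channel and consumer.
    val shards = (0 until 10).map(i => s"work.shard.$i")

    shards.foreach { queue =>
      val channel = connection.createChannel()
      channel.queueDeclare(queue, true, false, false, null)
      channel.basicQos(50) // cap unacknowledged messages per consumer

      val onDeliver: DeliverCallback = (_, delivery) => {
        // process the message here, then acknowledge it
        channel.basicAck(delivery.getEnvelope.getDeliveryTag, false)
      }
      val onCancel: CancelCallback = _ => ()

      channel.basicConsume(queue, false, onDeliver, onCancel)
    }
  }
}
```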

Yes and no. Some timeouts are hard-coded (10 s) in the AMQP client. Create more queues and use routing keys to dispatch messages across them.
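A minimal publisher-side sketch of that routing-key dispatch, assuming the RabbitMQ Java client and a hypothetical exchange/queue naming scheme; the hash-based shard selection is my own illustration of one way to spread the load:

```scala
import com.rabbitmq.client.ConnectionFactory

object RoutingKeyDispatch {
  def main(args: Array[String]): Unit = {
    val factory = new ConnectionFactory()
    factory.setHost("rabbitmq.example.com") // placeholder host
    val connection = factory.newConnection()
    val channel = connection.createChannel()

    val exchange = "work.direct" // placeholder exchange name
    channel.exchangeDeclare(exchange, "direct", true)

    // One queue per routing key; their consumers can run on different servers.
    val shardCount = 10
    (0 until shardCount).foreach { i =>
      val queue = s"work.shard.$i"
      channel.queueDeclare(queue, true, false, false, null)
      channel.queueBind(queue, exchange, s"shard.$i")
    }

    // Pick the routing key by hashing a stable message key onto a shard.
    def publish(messageKey: String, body: Array[Byte]): Unit = {
      val shard = (messageKey.hashCode & Int.MaxValue) % shardCount
      channel.basicPublish(exchange, s"shard.$shard", null, body)
    }

    publish("order-42", "hello".getBytes("UTF-8"))
  }
}
```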

Try to increase the parallelism (more partitions)
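For example, a minimal Spark sketch; the input path and the partition count of 200 are placeholders to tune for your own cluster:

```scala
import org.apache.spark.sql.SparkSession

object ParallelismSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("parallelism-sketch").getOrCreate()
    val events = spark.read.parquet("/data/events") // placeholder input

    println(events.rdd.getNumPartitions) // current degree of parallelism

    // More partitions => more tasks that can run concurrently.
    val widened = events.repartition(200)
    widened.write.parquet("/data/events_repartitioned") // placeholder output
  }
}
```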

Because of "data.repartition(1)" Your are removing all the partitions ... You should avoid such things if you don't control the size of the data you compute.

The repartition(1) is useless. Try removing it.

After the repartition(1) you are doing ".mode(SaveMode.Append)". Doesn't that work?
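For reference, the write would look roughly like this once the repartition(1) is dropped; the paths are hypothetical:

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

object AppendWriteSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("append-sketch").getOrCreate()
    val data = spark.read.parquet("/data/incoming") // placeholder input

    data // no repartition(1): keep the existing partitioning
      .write
      .mode(SaveMode.Append)
      .parquet("/data/warehouse/table") // placeholder output
  }
}
```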

In my opinion it is not the role of Spark Streaming to control the number of output files. There are some tricky solutions (saveAsHadoopFile is one) but you should probably avoid...
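For completeness, a sketch of what that kind of trick looks like: choosing the number of output files by choosing the partition count before saving through the Hadoop API. The paths are hypothetical and the coalesce(4) is my own illustration of the idea, not a recommendation:

```scala
import org.apache.hadoop.io.{NullWritable, Text}
import org.apache.hadoop.mapred.TextOutputFormat
import org.apache.spark.sql.SparkSession

object OutputCountSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("output-count-sketch").getOrCreate()
    val lines = spark.sparkContext.textFile("/data/input") // placeholder input

    lines
      .coalesce(4) // roughly one output file per partition
      .map(line => (NullWritable.get(), new Text(line)))
      .saveAsHadoopFile("/data/output", classOf[NullWritable], classOf[Text],
        classOf[TextOutputFormat[NullWritable, Text]])
  }
}
```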