expose max_tasks_per_node
Currently, this is always set to 1. It might be useful to expose this as a parameter and/or set it to the maximum possible value by default.
https://github.com/Azure/aztk/blob/f7c1cb51729ce5347ebd7a732d5b735142c1332c/aztk/client.py#L87
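A minimal sketch of what exposing this could look like, assuming the azure-batch Python SDK where `PoolAddParameter` accepts `max_tasks_per_node`. The helper name `create_pool_parameters` and the omitted fields are illustrative, not aztk's actual API:

```python
# Illustrative sketch only -- not aztk's actual code. Assumes the azure-batch
# Python SDK, whose PoolAddParameter model takes max_tasks_per_node.
import azure.batch.models as batch_models

def create_pool_parameters(pool_id, vm_size, target_nodes, max_tasks_per_node=1):
    """Build the pool spec, letting callers raise max_tasks_per_node
    instead of hard-coding it to 1."""
    return batch_models.PoolAddParameter(
        id=pool_id,
        vm_size=vm_size,
        target_dedicated_nodes=target_nodes,
        max_tasks_per_node=max_tasks_per_node,
        # image reference, start task, etc. omitted for brevity
    )
```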
Do we know what the impact of that would be on the Spark scheduler? Probably nothing at this point, but I'm not sure about the value of increasing it since we'll be bottlenecked / blocked by Spark scheduling on available resources, right?
Right now, the bottleneck could be the Batch service only allowing 1 task per node. We probably want to be bottlenecked by the Spark scheduler (not by the Batch service). The Spark scheduler should handle resource allocation between submitted applications.
The only thing the Batch service scheduler does is execute the spark-submits, and there isn't a reason to withhold sending spark-submits to the Spark scheduler.
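If we did go with a "max possible value" default, one hedged way to derive it would be from the VM size's core count, so each node can accept one spark-submit per core and leave the actual resource arbitration to Spark. The lookup table below is just an example, not something aztk currently has:

```python
# Hypothetical default: one Batch task slot per core on the node. The core
# counts below are examples; aztk would need a real VM-size-to-cores lookup.
EXAMPLE_VM_CORES = {"standard_d2_v2": 2, "standard_f8": 8}

def default_max_tasks_per_node(vm_size: str) -> int:
    # Azure Batch itself allows up to 4 task slots per core, so this is
    # conservative: it just lets Batch hand queued spark-submits to the node
    # and leaves resource allocation between applications to Spark.
    return EXAMPLE_VM_CORES.get(vm_size.lower(), 1)
```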