Optimize document-similarity memory requirements
Currently the top-level mapredChildJavaOpts value (e.g. defined at the document-similarity-oap-uberworkflow level) is propagated all the way down to all subworkflows and all Pig scripts.
Does that mean all the subworkflows and scripts share the same, pretty high, memory requirements?
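For context, this is the usual Oozie pattern that makes a single top-level value flow down to every sub-workflow; a minimal sketch, assuming standard `<sub-workflow>` actions (the action name and wiring below are illustrative, not copied from the actual workflow definitions):

```xml
<action name="some-subworkflow">
    <sub-workflow>
        <app-path>${wf:appPath()}/subworkflow</app-path>
        <!-- hands the parent's full configuration, including
             mapredChildJavaOpts, down to the child workflow -->
        <propagate-configuration/>
    </sub-workflow>
    <ok to="next-step"/>
    <error to="fail"/>
</action>
```

With `<propagate-configuration/>` in place, every level inherits the same heap setting unless a child explicitly overrides it.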
On the OpenAIRE CDH5 OCEAN cluster, after a number of experiments, we were able to get the top-level mapredChildJavaOpts value down to 4g without affecting document-similarity stability. The problem is that this still creates a performance bottleneck: due to the physical memory shortage, YARN can allocate at most ~200 of the 608 cores in total.
If we could get down to e.g. 1638m for some of the subworkflows, all 608 cores could be utilized in this phase of processing.
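Rough arithmetic behind that claim (the container sizes are assumptions, since they are not stated here): if ~200 concurrent 4g tasks exhaust physical memory, the cluster has on the order of 800g to 1000g to hand out; the same memory divided into 1638m heaps covers roughly 500 to 600 tasks, i.e. enough to approach or saturate all 608 cores.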
The idea could be to:
- spot the most memory-demanding subworkflows and explicitly propagate mapredChildJavaOpts to them (as it is done now), where "most memory demanding" = "requiring more than the default cluster configuration"; see the sketch after this list
- rely on the default cluster memory-related settings in all the other, less memory-demanding subworkflows (AFAIR 1g is the default, 1638m on the OpenAIRE CDH5 OCEAN cluster; not sure how the Spark cluster is configured, you probably know better)
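A sketch of what the explicit override could look like for a single memory-demanding Pig action; the action and script names are hypothetical, and mapred.child.java.opts is assumed to be the Hadoop property that the mapredChildJavaOpts parameter ultimately sets:

```xml
<action name="demanding-pig-step">
    <pig>
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <configuration>
            <property>
                <!-- explicit heap override only for this demanding step -->
                <name>mapred.child.java.opts</name>
                <value>-Xmx4g</value>
            </property>
        </configuration>
        <script>demanding_script.pig</script>
    </pig>
    <ok to="next-step"/>
    <error to="fail"/>
</action>
```

All the other, less demanding actions would simply omit the property and fall back to the cluster default (1g, or 1638m on OCEAN).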