Necromuncher

11 comments by Necromuncher

Is there a possibility to define these criteria on the CLI? If not, where should I look in the code in order to integrate such capabilities?

I'm sorry in advance if this sounds stupid, but after generating a new configmap and using it with the CLI like this:

```
oc create configmap m1-c4 --from-literal=spark.executor.memory=1g --from-literal=spark.executor.cores=4
oshinko...
```

I will! Thank you @tmckayus very much for this clarification! I can't imagine how much wasted time this simple change in the configmap creation method will save. Maybe this should...
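For anyone landing here later, a minimal sketch of the file-based creation method, assuming the entry Oshinko reads is a file named `spark-defaults.conf` (the settings and configmap name below are illustrative):

```
# Write the desired executor settings into a spark-defaults.conf file
cat > spark-defaults.conf <<EOF
spark.executor.memory 1g
spark.executor.cores 4
EOF

# --from-file keys the configmap entry by the filename, so the entry
# becomes "spark-defaults.conf" rather than one entry per literal
oc create configmap m1-c4 --from-file=spark-defaults.conf
```

The difference from `--from-literal` is that the configmap ends up with a single file-shaped entry instead of one entry per setting.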

**This post describes a problem I am facing while using Oshinko v0.4.6 - if it has been solved in a current unreleased state, I'll be glad to see the solution.** Regarding...

Can you share the job you ran so I can try to replicate your results? (So far I have only run a spark-shell command on a pod terminal, and the logs...
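For context, the spark-shell run was nothing elaborate; from a pod terminal it was something along these lines (the master URL is a placeholder, not the actual service name):

```
# Attach a shell to the Spark standalone master on its default port;
# <cluster-name> is a placeholder for the real cluster service name
spark-shell --master spark://<cluster-name>:7077
```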

@crobby I executed this command on one of my worker pods and the log output was this:

```
18/04/25 06:02:16 INFO ExecutorRunner: Launch command: "/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64/jre/bin/java" "-cp" "/opt/spark/conf/:/opt/spark/jars/*" "-Xmx1024M" "-Dspark.driver.port=41402" "org.apache.spark.executor.CoarseGrainedExecutorBackend"...
```
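(If anyone wants to replicate this check, one way to pull that line out of a worker pod's log is a sketch like the following, with `<worker-pod>` standing in for the actual pod name:)

```
# Show the executor launch command from a worker pod's log;
# <worker-pod> is a placeholder for the real pod name
oc logs <worker-pod> | grep "Launch command"
```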

My output for `oc get configmap -o yaml` is:

```
apiVersion: v1
items:
- apiVersion: v1
  data:
    w-c2m2: |
      spark.executor.cores 2
      spark.executor.memory 2g
  kind: ConfigMap
  metadata:
    creationTimestamp: 2018-04-25T06:20:33Z
    name: c2m2...
```
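For contrast, a sketch of what I assume the expected shape would be if the entry were keyed as `spark-defaults.conf` instead of `w-c2m2` (hypothetical output, not taken from my cluster):

```
apiVersion: v1
data:
  spark-defaults.conf: |
    spark.executor.cores 2
    spark.executor.memory 2g
kind: ConfigMap
metadata:
  name: c2m2
```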

My c2m2 is a directory. `$ tree oshinko-spark.conf.d/`

```
oshinko-spark.conf.d/
├── c2m2
│   └── w-c2m2
└── c4m1
    └── w-c4m1
```

Maybe it can't operate if `spark-defaults.conf` isn't in the directory?
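If that guess is right, a sketch of the fix under that assumption (pointing `--from-file` at a directory is standard `oc`/`kubectl` behavior: one configmap entry per file, keyed by filename):

```
# Rename the per-cluster files so each directory holds a spark-defaults.conf
mv oshinko-spark.conf.d/c2m2/w-c2m2 oshinko-spark.conf.d/c2m2/spark-defaults.conf
mv oshinko-spark.conf.d/c4m1/w-c4m1 oshinko-spark.conf.d/c4m1/spark-defaults.conf

# Each file in the directory becomes an entry keyed by its filename
oc create configmap c2m2 --from-file=oshinko-spark.conf.d/c2m2/
```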

After switching to the standard `spark-defaults.conf` naming, I did some tests and discovered something interesting. I ran the SparkPi job on the default cluster (2 cores and 1g RAM per executor,...
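For reference, a sketch of a SparkPi invocation along these lines (the examples jar path and version suffix depend on the Spark build in the image, so treat them as assumptions):

```
# Submit the SparkPi example bundled with the Spark distribution;
# the jar path and version suffix are assumptions, not confirmed values
spark-submit \
  --class org.apache.spark.examples.SparkPi \
  /opt/spark/examples/jars/spark-examples_2.11-2.2.1.jar \
  1000
```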