If a user has modified the cluster name to something other than 'hdfs', the current health check command won't work. We should double-check the value of fs.default.name/fs.defaultFS that (a)...
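A minimal sketch (assuming the standard Hadoop `Configuration` API) of deriving the cluster name from `fs.defaultFS`, falling back to the deprecated `fs.default.name`, instead of hard-coding 'hdfs'; the helper class and fallback value are hypothetical:

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;

// Hypothetical helper: resolve the cluster name from the configured filesystem
// URI rather than assuming it is "hdfs".
public class ClusterNameResolver {
  public static String clusterName(Configuration conf) {
    String defaultFs = conf.get("fs.defaultFS",
        conf.get("fs.default.name", "hdfs://hdfs"));
    // e.g. "hdfs://mycluster" -> "mycluster"
    return URI.create(defaultFs).getAuthority();
  }
}
```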
Right now the executor binary is packaged into a tarball that is extracted and run on each slave. It would be great if we could use Docker to package it...
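One possible shape for this, sketched with the Mesos `ContainerInfo`/`DockerInfo` protos; the image name and launch command are hypothetical placeholders, not the project's actual executor setup:

```java
import org.apache.mesos.Protos.CommandInfo;
import org.apache.mesos.Protos.ContainerInfo;
import org.apache.mesos.Protos.ExecutorID;
import org.apache.mesos.Protos.ExecutorInfo;

public class DockerExecutorExample {
  // Sketch: describe the executor via the Docker containerizer instead of a
  // fetched-and-extracted tarball.
  static ExecutorInfo dockerExecutor() {
    return ExecutorInfo.newBuilder()
        .setExecutorId(ExecutorID.newBuilder().setValue("hdfs-executor"))
        .setCommand(CommandInfo.newBuilder()
            .setValue("java -jar /opt/hdfs-executor.jar")  // hypothetical command
            .setShell(true))
        .setContainer(ContainerInfo.newBuilder()
            .setType(ContainerInfo.Type.DOCKER)
            .setDocker(ContainerInfo.DockerInfo.newBuilder()
                .setImage("example/hdfs-mesos-executor:latest")))  // hypothetical image
        .build();
  }
}
```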
There's a chance that task reconciliation could return a status update for a task that the scheduler (and its persistentState) either does not know about, or no longer considers to...
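One way this could be handled, sketched below: if a reconciled status names a task the scheduler no longer tracks, kill it so the cluster converges back to the persisted state. The class and the set of known task IDs are hypothetical stand-ins for whatever persistentState exposes:

```java
import java.util.Set;
import org.apache.mesos.Protos.TaskStatus;
import org.apache.mesos.SchedulerDriver;

public class ReconciliationHandler {
  private final Set<String> knownTaskIds; // e.g. task IDs loaded from persistentState

  public ReconciliationHandler(Set<String> knownTaskIds) {
    this.knownTaskIds = knownTaskIds;
  }

  // If a reconciled status refers to a task we no longer track, kill it so the
  // running cluster converges back to the scheduler's persisted view.
  public void handleStatus(SchedulerDriver driver, TaskStatus status) {
    if (!knownTaskIds.contains(status.getTaskId().getValue())) {
      driver.killTask(status.getTaskId());
    }
  }
}
```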
Dockerize it!
Looks like SchedulerConf is bloating the build with all of hadoop-common just for a few classes.

```
$ grep -rn "org.apache.hadoop" src/
src/main/java/org/apache/mesos/hdfs/config/SchedulerConf.java:4:import org.apache.hadoop.conf.Configuration;
src/main/java/org/apache/mesos/hdfs/config/SchedulerConf.java:5:import org.apache.hadoop.conf.Configured;
src/main/java/org/apache/mesos/hdfs/config/SchedulerConf.java:6:import org.apache.hadoop.fs.Path;
src/test/java/org/apache/mesos/hdfs/TestScheduler.java:4:import org.apache.hadoop.conf.Configuration;
...
```
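If the only thing needed from hadoop-common is simple key/value configuration, a plain `Properties`-backed holder could replace it. This is a hypothetical sketch, not the project's actual config class:

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

// Hypothetical alternative: cover the few values SchedulerConf reads with
// java.util.Properties, avoiding the hadoop-common dependency entirely.
public class SchedulerSettings {
  private final Properties props = new Properties();

  public SchedulerSettings(String path) throws IOException {
    try (FileInputStream in = new FileInputStream(path)) {
      props.load(in);
    }
  }

  public String get(String key, String defaultValue) {
    return props.getProperty(key, defaultValue);
  }
}
```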
Can launch all JournalNodes (JNs), NameNodes (NNs), or DataNodes (DNs) together if the master offers resources on multiple slaves. Otherwise, HDFS must wait for the next offer cycle before launching...
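A rough sketch of launching one task per usable offer within a single offer cycle, instead of taking one slave per cycle; `buildTask` is a hypothetical helper and no resource matching is shown:

```java
import java.util.ArrayList;
import java.util.List;
import org.apache.mesos.Protos.Offer;
import org.apache.mesos.Protos.TaskInfo;
import org.apache.mesos.SchedulerDriver;

public class BatchLauncher {
  // When one offer cycle includes offers from several slaves, launch a node
  // (e.g. a JournalNode) on each of them rather than waiting for the next cycle.
  public void resourceOffers(SchedulerDriver driver, List<Offer> offers) {
    for (Offer offer : offers) {
      List<TaskInfo> tasks = new ArrayList<>();
      tasks.add(buildTask(offer)); // hypothetical: one JN/NN/DN task per slave
      driver.launchTasks(offer.getId(), tasks);
    }
  }

  private TaskInfo buildTask(Offer offer) {
    throw new UnsupportedOperationException("hypothetical helper");
  }
}
```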
Currently the framework uses the hadoop/hdfs 2.4.0 dependencies in the pom, but the `build-hdfs` packaging script packages up hadoop-2.3.0-cdh5.1.0 from Cloudera.

1. The pom dependency and packaged binary versions should...