Clay B.
This looks like a good typo fix to me. I don't see any instance of `slowTaskRelativeTresholds` elsewhere in the Hadoop code base.
Oh, also, to explain: we override `/etc/security/namespace.init` to correct [Ubuntu Bug 1081323](https://bugs.launchpad.net/ubuntu/+source/pam/+bug/1081323).
All requested changes made
Though this is frozen now, it may help folks to see that we at Bloomberg did write an Oozie workflow and Chef recipes to deploy HDFS-DU in https://github.com/bloomberg/chef-bach/pull/932
Ah, this is a VM-specific issue for testing. I cannot yet see how to apply `repxe-host.sh` to that...
@aespinosa Ah, good idea; yes, my hang-up was that this broke the idempotency of `tests/automated_install.sh`, so it may be possible to envision it doing the necessary work rather than `cluster_assign_roles.rb`.
It seems like we should write a quick library function to check whether HDFS is available to receive files, as we use this idiom in many places in Chef-BACH.
HDFS will not be able to receive files when we first converge, as we will have only namenodes but no datanodes. It is not until the second Chef pass that we...
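A minimal sketch of what such a library helper might look like (all names here are illustrative, not from the repo): a touchz/rm round trip fails unless both the namenode and at least one datanode are live, which is exactly the condition the first converge cannot meet. The shell runner is injected so the logic can be exercised without a running cluster.

```ruby
# Hypothetical helper: probe whether HDFS can currently accept files,
# so recipes can guard uploads until the second Chef pass.
def hdfs_writable?(runner = ->(cmd) { system(cmd) })
  probe = '/tmp/.hdfs_write_probe'
  # Writing and deleting a zero-length file requires a live namenode
  # *and* datanodes; listing the root only requires the namenode.
  runner.call("hdfs dfs -touchz #{probe} && hdfs dfs -rm -skipTrash #{probe}")
end
```

A recipe could then wrap its `execute 'hdfs dfs -put ...'` resources in an `only_if { hdfs_writable? }` guard, keeping the first converge idempotent.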
That sounds like a good plan! My only food for thought is that we should remain modular, so we can add or remove HDFS-DU without affecting core `bcpc-hadoop` beyond wrapper recipes specific to HDFS-DU.
We should have moved to `locking_resource` by now. If we can remove these functions too, that'd be great! As I recall, formatting the HDFS ZKFC znode may use this znode handling...