
Ambari extension with stack HDP 1.3

Open • ggrasso opened this issue 10 years ago • 6 comments

I was able to deploy ambari_env.sh following the tutorial. However, because I need Hadoop 1 to run some existing jobs, I tried to install HDP 1.3 by setting the variable AMBARI_STACK_VERSION='1.3' in ambari.conf.
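
(For reference, this amounts to a one-line shell-style override in ambari.conf, which bdutil sources; the default of 2.2 is inferred from the discussion below:)

# In ambari.conf (sourced by bdutil as shell).
# Default stack is HDP 2.2; override for Hadoop 1 workloads:
AMBARI_STACK_VERSION='1.3'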

bdutil got stuck at:

Invoking on master: ./install-ambari-components.sh
Waiting on async 'ssh' jobs to finish. Might take a while....

Looking at install-ambari-components_deploy.stdout, it says:

Provisioning ambari cluster.
{ "status" : 400, "message" : "Unable to update configuration property with topology information. Component 'JOBTRACKER' is not mapped to any host group or is mapped to multiple groups." }
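
(For anyone hitting this: the 400 comes from Ambari's blueprint/cluster-creation API, which requires every component, JOBTRACKER included, to appear in exactly one host group. The blueprint that bdutil registered can be inspected over the standard Ambari REST API; admin:admin are the stock credentials, and the blueprint name is a placeholder:)

curl -u admin:admin http://localhost:8080/api/v1/blueprints
curl -u admin:admin http://localhost:8080/api/v1/blueprints/BLUEPRINT_NAME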

while install-ambari-components_deploy.stderr shows a loop printing:

ambari_wait status:
curl: no URL specified!
curl: try 'curl --help' or 'curl --manual' for more information
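
(That pair of messages is what curl prints when it is given options but no URL argument at all, which typically happens when an unquoted shell variable expands to nothing; an illustrative snippet, with a hypothetical variable name:)

AMBARI_STATUS_URL=""        # hypothetical: whatever URL the ambari_wait loop polls
curl -s $AMBARI_STATUS_URL  # unquoted empty variable: curl receives no URL argument
# curl: no URL specified!
# curl: try 'curl --help' or 'curl --manual' for more information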

Is stack 1.3 supposed to work? I was also thinking that the variable AMBARI_SERVICES needs to be adjusted for 1.3; if so, what values should it take?

ggrasso avatar Jan 29 '15 13:01 ggrasso

Only the default Ambari 1.7.0 + HDP 2.2 stack is officially supported and certified at the moment. However, the dependency on Hadoop 1 for some use cases is indeed a concern; we've had some less-thoroughly-tested but tentative success using ambari_manual_env.sh, where bdutil only installs Ambari without automating the installation of a particular HDP stack. You should be able to use that, then log in with username/password admin:admin once bdutil finishes, and perform your 1.3 installation manually, remembering to add the config keys that enable the GCS connector by hand.
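
(Those core-site keys can be set from the Ambari UI, or scripted with the configs.sh helper that ships with Ambari of this vintage; a rough sketch only, where the cluster name and project ID are placeholders and the exact keys should be checked against your GCS connector version's documentation:)

# Sketch; verify the helper's path and arguments on your install.
CONFIGS=/var/lib/ambari-server/resources/scripts/configs.sh
$CONFIGS -u admin -p admin set localhost CLUSTER_NAME core-site \
    fs.gs.impl com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem
$CONFIGS -u admin -p admin set localhost CLUSTER_NAME core-site \
    fs.gs.project.id YOUR_PROJECT_ID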

dennishuo avatar Jan 30 '15 18:01 dennishuo

@ggrasso - The full provisioning (ambari_env.sh) was built around Apache Ambari's Blueprint Recommendations, which target HDP 2.1+. I've tested with HDP 2.1 & 2.2.

I'd second @dennishuo's recommendation to use ambari_manual_env.sh and then install from there.

seanorama avatar Jan 31 '15 12:01 seanorama

After some digging, I see that the HDP 1.3 stack does support the services in question.

I'm running bdutil now with the following set in ambari.conf and will report back:

AMBARI_SERVICES="GANGLIA HBASE HDFS HIVE MAPREDUCE NAGIOS OOZIE PIG SQOOP ZOOKEEPER"

http --session-read-only temp http://localhost:8080/api/v1/stacks/HDP/versions/1.3/services | jq '.items[].StackServices.service_name'
"GANGLIA"
"HBASE"
"HDFS"
"HIVE"
"MAPREDUCE"
"NAGIOS"
"OOZIE"
"PIG"
"SQOOP"
"ZOOKEEPER"

seanorama avatar Jan 31 '15 12:01 seanorama

It nearly works, but fails when starting services because mapreduce.history.server.http.address is set to the wrong host. After I manually corrected the configuration, all services started successfully.
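
(A single property like that can also be corrected without the UI, using the same configs.sh helper sketched earlier; host, cluster name, and port are placeholders:)

/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin \
    set localhost CLUSTER_NAME mapred-site \
    mapreduce.history.server.http.address MASTER_HOST:PORT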

But since those post-deploy additions are triggered only after a successful build, the GCS connector and a few other settings are missing.

Note that I'm not sure Ambari Blueprint Recommendations officially support HDP 1.3, but I will ask. It likely would have succeeded had I used a static blueprint or driven Ambari manually.

seanorama avatar Jan 31 '15 13:01 seanorama

@seanorama @dennishuo thank you. I will try ambari_manual_env.sh. In this respect, the docs mention that these steps will not be taken:

  • initialization of HDFS /user directories (check the function initialize_hdfs_dirs in ../../libexec/hadoop_helpers.sh; a rough manual equivalent is sketched after this list)
  • installation of the GCS connector (check ./install_gcs_connector_on_ambari.sh & ./update_ambari_config.sh)
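
(The /user initialization roughly boils down to the following, assuming the usual HDP layout where hdfs is the HDFS superuser; USERNAME is a placeholder, and hadoop_helpers.sh remains the authoritative version:)

sudo -u hdfs hadoop fs -mkdir /user/USERNAME
sudo -u hdfs hadoop fs -chown USERNAME /user/USERNAME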

It's not totally clear to me whether I should run those scripts manually (on every machine?) or create a custom env file to pass to bdutil along with ambari_manual_env.sh. Could you comment on this?

ggrasso avatar Feb 01 '15 18:02 ggrasso

There are a couple of issues with running those scripts on a manually deployed cluster: they assume the name of the cluster, and they assume all nodes have a hadoop-client directory, which they don't necessarily. I just opened an issue specifically tracking the post-ambari_manual_env.sh configuration issue.

As for running manually vs. a new env file, you could look at bdutil run_command_group --target [ master | all ] as an in-between, but in any case it isn't quite working at present anyway.
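
(Something along these lines, heavily caveated since, as noted, it isn't quite working at present; the -e flag selects the env file, and the group name here is hypothetical since command groups are defined via COMMAND_GROUPS in the env files:)

./bdutil -e ambari_manual_env.sh run_command_group --target master SOME_COMMAND_GROUP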

pmkc avatar Feb 03 '15 02:02 pmkc