
Unable to create job when MRS cluster created via Terraform

Open · thanapatkit opened this issue 11 months ago · 1 comment

While attempting to create a job after setting up an MRS cluster using Terraform, I encountered the following error message:

```
Error: error execution MapReduce job: Bad request with: [POST https://mrs.ap-southeast-2.myhuaweicloud.com/v2/2786001ce8074c4c8e5a5ebb3499dc37/clusters/17cd0faa-c252-429f-a034-2d5ef3aed6aa/job-executions], request_id: 8f7ede12462caeab8c644e911bde93cf, error message: {"error_code":"0192","error_msg":"The current user does not exist on MRS Manager. Grant the user sufficient permissions on IAM and then perform IAM user synchronization on the Dashboard tab page."}
```

Upon further investigation, I found that this issue does not occur when I manually create the MRS cluster via the HWC Console: creating the job with Terraform afterwards works properly.

Here are example variables:

**Cluster**
```hcl
availability_zone    = "ap-southeast-2a"
mapreduce_version    = "MRS 3.1.0"
cluster_type         = "ANALYSIS"
component_list       = ["Hadoop", "Hive", "Tez", "Spark2x", "Flink", "ZooKeeper", "Ranger"]
vpc_id               = "xxxxxxx"
subnet_id            = "xxxxxxx"
master_nodes = [
  {
    flavor            = "m3.2xlarge.8.linux.bigdata"
    node_number       = 2
    root_volume_type  = "SATA"
    root_volume_size  = 500
    data_volume_count = 1
    data_volume_type  = "SATA"
    data_volume_size  = 650
    assigned_roles    = [
      "OMSServer:1,2",
      "SlapdServer:1,2",
      "KerberosServer:1,2",
      "KerberosAdmin:1,2",
      "quorumpeer:1,2,3",
      "NameNode:2,3",
      "Zkfc:2,3",
      "JournalNode:1,2,3",
      "ResourceManager:2,3",
      "JobHistoryServer:3",
      "DBServer:1,3",
      "HttpFS:1,3",
      "TimelineServer:3",
      "RangerAdmin:1,2",
      "UserSync:2",
      "TagSync:2",
      "KerberosClient",
      "SlapdClient",
      "meta"
    ]
  }
]
manager_admin_pass = "xxxxxx"
node_admin_pass    = "xxxxx"
analysis_core_nodes = [
  {
    flavor            = "m3.2xlarge.8.linux.bigdata"
    node_number       = 3
    root_volume_type  = "SATA"
    root_volume_size  = 500
    data_volume_count = 1
    data_volume_type  = "SATA"
    data_volume_size  = 650
    assigned_roles    = []
  }
]
analysis_task_nodes = [
  {
    flavor            = "m3.2xlarge.8.linux.bigdata"
    node_number       = 2
    root_volume_type  = "SATA"
    root_volume_size  = 500
    data_volume_count = 1
    data_volume_type  = "SATA"
    data_volume_size  = 650
    assigned_roles    = []
  }
]
```
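For context, these cluster variables would typically be wired into the provider's `huaweicloud_mapreduce_cluster` resource roughly as sketched below. The `var.*` references, the resource label, and the cluster name are illustrative assumptions, not part of the original report:

```hcl
# Hypothetical sketch: feeding the variables above into the
# huaweicloud_mapreduce_cluster resource. Only the master_nodes block
# is shown; analysis_core_nodes / analysis_task_nodes follow the same pattern.
resource "huaweicloud_mapreduce_cluster" "analysis" {
  name               = "mrs-analysis-demo" # assumed cluster name
  version            = var.mapreduce_version
  type               = var.cluster_type
  availability_zone  = var.availability_zone
  component_list     = var.component_list
  vpc_id             = var.vpc_id
  subnet_id          = var.subnet_id
  manager_admin_pass = var.manager_admin_pass
  node_admin_pass    = var.node_admin_pass

  dynamic "master_nodes" {
    for_each = var.master_nodes
    content {
      flavor            = master_nodes.value.flavor
      node_number       = master_nodes.value.node_number
      root_volume_type  = master_nodes.value.root_volume_type
      root_volume_size  = master_nodes.value.root_volume_size
      data_volume_count = master_nodes.value.data_volume_count
      data_volume_type  = master_nodes.value.data_volume_type
      data_volume_size  = master_nodes.value.data_volume_size
      assigned_roles    = master_nodes.value.assigned_roles
    }
  }
}
```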
**Job**
```hcl
name               = "spark-submit-01"
type               = "SparkSubmit"
program_path       = "obs://xxxxx/spark/driver_behavior.jar"
parameters         = "AK SK 1 obs://xxxx/input obs://xxxx/output"
program_parameters = { "--class" = "com.huawei.bigdata.spark.examples.DriverBehavior" }
```
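And the job variables map onto a `huaweicloud_mapreduce_job` resource along these lines. This is a sketch assuming the cluster is managed in the same configuration under the label `huaweicloud_mapreduce_cluster.analysis`; that reference name is an assumption for illustration:

```hcl
# Hypothetical sketch: the job definition above as a
# huaweicloud_mapreduce_job resource, referencing the managed cluster's id
# so Terraform creates the job only after the cluster exists.
resource "huaweicloud_mapreduce_job" "spark_submit" {
  cluster_id   = huaweicloud_mapreduce_cluster.analysis.id # assumed reference
  name         = "spark-submit-01"
  type         = "SparkSubmit"
  program_path = "obs://xxxxx/spark/driver_behavior.jar"
  parameters   = "AK SK 1 obs://xxxx/input obs://xxxx/output"

  program_parameters = {
    "--class" = "com.huawei.bigdata.spark.examples.DriverBehavior"
  }
}
```

Per the error message, the failure happens at this step because the IAM user running Terraform has not yet been synchronized to MRS Manager on the newly created cluster.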

Could you please assist me in resolving this issue? Any guidance or support you can provide would be greatly appreciated.

thanapatkit avatar Mar 08 '24 11:03 thanapatkit

@thanapatkit you can grant the user permissions from the MRS console, then perform IAM user synchronization as the error message suggests.

ShiChangkuo avatar Mar 19 '24 03:03 ShiChangkuo