Grafana monitoring not working for static resources
When we use a non-zero minimum count in the cluster config for a compute resource, the nodes come up at cluster launch. In that case this job-related check will never evaluate to True: https://github.com/aws-samples/1click-hpc/blob/7a833d4a56dd42d28836b91938168cd4ca841e28/modules/40.install.monitoring.compute.sh#L59
Because this has to run in the root context, the only opportunity is a prolog script attached to the job, so the plan would basically be:
- install the docker container anyway in post-install, but do not start it
- use the prolog and epilog to start and stop the container depending on the user's choice to monitor or not (a rough sketch follows this list)
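A minimal sketch of that prolog/epilog pair, assuming the container installed by 40.install.monitoring.compute.sh is created on the node but left stopped, and that it is reachable under a name like `monitoring` (the container name and the plain `docker start`/`docker stop` calls are illustrative assumptions, not what the install script currently does):

```bash
#!/bin/bash
# Prolog sketch: runs as root on the compute node before the job starts.
# Start the pre-installed, currently stopped monitoring container.
docker start monitoring >/dev/null 2>&1 || true
```

```bash
#!/bin/bash
# Epilog sketch: runs as root on the compute node after the job finishes.
# Stop the container again so unmonitored nodes stay quiet.
docker stop monitoring >/dev/null 2>&1 || true
```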
The problem is how to pass a signal about the job to the prolog and epilog, since custom user environment variables are not forwarded and neither is the job comment. Per the Slurm manuals we should not call scontrol from the prolog, as that would impair job scaling in the same way the API calls do (related to #34).
Looking at the variables available at prolog/epilog time, I only have two ideas so far:
- SLURM_PRIO_PROCESS: "Scheduling priority (nice value) at the time of submission. Available in SrunProlog, TaskProlog, SrunEpilog and TaskEpilog." We can pass `#SBATCH --nice=0` (or some other sensible value) to uniquely mark the intention, then use the TaskProlog and TaskEpilog to start/stop the monitoring container.
- use a crafted Slurm job name like `[GM] my job name`, then pick that up and interpret it from SLURM_JOB_NAME: "Name of the job. Available in PrologSlurmctld, SrunProlog, TaskProlog, EpilogSlurmctld, SrunEpilog and TaskEpilog." This also means using the TaskProlog and TaskEpilog to start/stop the monitoring container (a sketch follows below).
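A TaskProlog sketch for the job-name idea, assuming the user opts in by prefixing the job name with `[GM]`, the container is named `monitoring`, and the docker socket is accessible from the TaskProlog context (all of these are illustrative assumptions):

```bash
#!/bin/bash
# TaskProlog sketch: only start monitoring for jobs that opted in via the job name.
case "${SLURM_JOB_NAME}" in
    "[GM]"*)
        # The job name carries the agreed marker: start the pre-installed container.
        docker start monitoring >/dev/null 2>&1 || true
        ;;
    *)
        # No marker: leave the container stopped.
        :
        ;;
esac
```

The matching TaskEpilog would repeat the same test and call `docker stop monitoring`.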
I propose that this line https://github.com/aws-samples/1click-hpc/blob/7a833d4a56dd42d28836b91938168cd4ca841e28/scripts/post.install.sh#L79 be changed to
`export monitoring_home="${SHARED_FS_DIR}/${monitoring_dir_name}/${stack_name}"`
The reason is that the stack name stays unique even when a cluster name is recycled, and we can then use SLURM_CLUSTER_NAME ("Name of the cluster executing the job") inside the prolog scripts, avoiding the undesired calls to scontrol. A sketch of that lookup follows.
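For illustration, a prolog-side sketch of rebuilding the path purely from the environment; `SHARED_FS_DIR` and `monitoring_dir_name` would have to be hard-coded or sourced from a file written at post-install time, and the concrete values below are assumptions:

```bash
#!/bin/bash
# Prolog/epilog sketch: rebuild the per-cluster monitoring path without calling scontrol.
# Assumes SLURM_CLUSTER_NAME matches the ${stack_name} used at post-install time
# (see the caveat about its current value below).
SHARED_FS_DIR="/fsx"               # assumption: mirrors the value used in post.install.sh
monitoring_dir_name="monitoring"   # assumption: mirrors the value used in post.install.sh
monitoring_home="${SHARED_FS_DIR}/${monitoring_dir_name}/${SLURM_CLUSTER_NAME}"
# ...start/stop whatever lives under ${monitoring_home} from here...
```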
I will eventually submit a PR; for now this is at the brainstorming stage to settle on the best approach.
(Hm, the current value of SLURM_CLUSTER_NAME is always parallelcluster, so we might need to change it to parallelcluster-stack_name or something similar. I see this is set in slurm.conf, so we would need to modify it during post-install on the head node, and also on the compute nodes in the case of static nodes, I suppose.) A rough sketch of that change follows.
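A post-install sketch of that change, assuming the default ParallelCluster slurm.conf location and the `${stack_name}` variable already available in post.install.sh; renaming the cluster may have side effects (e.g. on accounting), so treat this as a starting point only:

```bash
#!/bin/bash
# Post-install sketch: make SLURM_CLUSTER_NAME unique per cluster by embedding the stack name.
SLURM_CONF="/opt/slurm/etc/slurm.conf"   # assumption: default ParallelCluster location
sed -i "s/^ClusterName=.*/ClusterName=parallelcluster-${stack_name}/" "${SLURM_CONF}"
systemctl restart slurmctld              # head node; running static compute nodes would restart slurmd
```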
Related ParallelCluster issue: https://github.com/aws/aws-parallelcluster/issues/4218