kube-prometheus
PodMonitor does not work
```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  labels:
    k8s-app: prometheus-jvm
  name: prometheus-jvm
  namespace: monitoring
spec:
  namespaceSelector:
    any: true
  podMetricsEndpoints:
  - interval: 30s
    port: jvm
  selector:
    matchLabels:
      app-type: java
```
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jvm
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: jvm
        app-type: java
    spec:
      hostname: jvm
      containers:
      - name: jvm
        imagePullPolicy: Never
        image: prod.registry.cmiot.chinamobile.com/market/jvm:v6
        command:
        - java
        - -javaagent:./jmx_prometheus_javaagent-0.12.0.jar=8089:config.yaml
        - -jar
        - data-collector-0.0.1-SNAPSHOT.jar
        ports:
        - containerPort: 8089
          name: jvm
```
```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  labels:
    prometheus: k8s-jvm
  name: k8s-jvm
  namespace: monitoring
spec:
  alerting:
    alertmanagers:
    - name: alertmanager-main
      namespace: monitoring
      port: web
  baseImage: tcloud.hub/prometheus/prometheus
  replicas: 1
  resources:
    requests:
      memory: 400Mi
  ruleSelector:
    matchLabels:
      prometheus: jvm
      role: alert-rules
  securityContext:
    fsGroup: 2000
    runAsNonRoot: true
    runAsUser: 1000
  serviceAccountName: prometheus-k8s
  podMonitorNamespaceSelector: {}
  podMonitorSelector: {}
  version: v2.11.0
```
Please follow the issue template. Could you please share the logs of prometheus and the prometheus-operator?
@brancz I did not see any errors in the Prometheus or prometheus-operator logs.
Can you share the generated Prometheus config? The one you see on /config in the Prometheus UI.
@brancz
```yaml
global:
  scrape_interval: 1m
  scrape_timeout: 10s
  evaluation_interval: 1m
```
What version of the prometheus operator are you using?
@brancz v0.31.1
Can you bump the log verbosity to debug and share the prometheus-operator logs with us?
Does the `--v=10` argument work in prometheus-operator, or is there anything else?
It's `--log-level=debug`: https://github.com/coreos/prometheus-operator/blob/c13bcedd9f9b5dff9840785c3b02d9066dd6b2a8/cmd/operator/main.go#L148
level=debug ts=2019-08-10T09:56:50.153216868Z caller=rules.go:196 component=prometheusoperator msg="selected Rules" rules= namespace=monitoring prometheus=k8s-jvm
level=debug ts=2019-08-10T09:56:50.167048849Z caller=rules.go:71 component=prometheusoperator msg="no PrometheusRule changes" namespace=monitoring prometheus=k8s-jvm
level=debug ts=2019-08-10T09:56:50.180393511Z caller=operator.go:1116 component=prometheusoperator msg="new statefulset generation inputs match current, skipping any actions"
level=debug ts=2019-08-10T09:56:57.250305121Z caller=operator.go:412 component=alertmanageroperator msg="update handler" old=40947714 cur=40947754
level=debug ts=2019-08-10T09:56:57.250554664Z caller=operator.go:1014 component=prometheusoperator msg="update handler" old=40947714 cur=40947754
level=debug ts=2019-08-10T09:56:57.250581037Z caller=operator.go:1024 component=prometheusoperator msg="StatefulSet updated"
level=info ts=2019-08-10T09:56:57.250616921Z caller=operator.go:1050 component=prometheusoperator msg="sync prometheus" key=monitoring/k8s-jvm
level=debug ts=2019-08-10T09:56:57.250629045Z caller=rules.go:154 component=prometheusoperator msg="selected RuleNamespaces" namespaces=monitoring namespace=monitoring prometheus=k8s-jvm
level=debug ts=2019-08-10T09:56:57.250653751Z caller=rules.go:196 component=prometheusoperator msg="selected Rules" rules= namespace=monitoring prometheus=k8s-jvm
level=debug ts=2019-08-10T09:56:57.254969977Z caller=rules.go:71 component=prometheusoperator msg="no PrometheusRule changes" namespace=monitoring prometheus=k8s-jvm
level=debug ts=2019-08-10T09:56:57.267442205Z caller=operator.go:1116 component=prometheusoperator msg="new statefulset generation inputs match current, skipping any actions"
level=debug ts=2019-08-10T09:57:14.974999308Z caller=operator.go:504 component=prometheusoperator msg="Prometheus updated" key=monitoring/k8s-jvm
level=info ts=2019-08-10T09:57:14.975069332Z caller=operator.go:1050 component=prometheusoperator msg="sync prometheus" key=monitoring/k8s-jvm
level=debug ts=2019-08-10T09:57:14.975087205Z caller=rules.go:154 component=prometheusoperator msg="selected RuleNamespaces" namespaces=monitoring namespace=monitoring prometheus=k8s-jvm
level=debug ts=2019-08-10T09:57:14.975114055Z caller=rules.go:196 component=prometheusoperator msg="selected Rules" rules= namespace=monitoring prometheus=k8s-jvm
level=debug ts=2019-08-10T09:57:14.987427431Z caller=rules.go:71 component=prometheusoperator msg="no PrometheusRule changes" namespace=monitoring prometheus=k8s-jvm
level=debug ts=2019-08-10T09:57:15.002847417Z caller=operator.go:1120 component=prometheusoperator msg="updating current Prometheus statefulset"
level=debug ts=2019-08-10T09:57:15.011858111Z caller=operator.go:412 component=alertmanageroperator msg="update handler" old=40947754 cur=40947800
level=debug ts=2019-08-10T09:57:15.012283696Z caller=operator.go:1014 component=prometheusoperator msg="update handler" old=40947754 cur=40947800
level=debug ts=2019-08-10T09:57:15.012313052Z caller=operator.go:1024 component=prometheusoperator msg="StatefulSet updated"
level=info ts=2019-08-10T09:57:15.012341313Z caller=operator.go:1050 component=prometheusoperator msg="sync prometheus" key=monitoring/k8s-jvm
level=debug ts=2019-08-10T09:57:15.012351646Z caller=rules.go:154 component=prometheusoperator msg="selected RuleNamespaces" namespaces=monitoring namespace=monitoring prometheus=k8s-jvm
level=debug ts=2019-08-10T09:57:15.012428595Z caller=rules.go:196 component=prometheusoperator msg="selected Rules" rules= namespace=monitoring prometheus=k8s-jvm
level=debug ts=2019-08-10T09:57:15.016831071Z caller=rules.go:71 component=prometheusoperator msg="no PrometheusRule changes" namespace=monitoring prometheus=k8s-jvm
level=debug ts=2019-08-10T09:57:15.037597475Z caller=operator.go:1014 component=prometheusoperator msg="update handler" old=40947800 cur=40947806
level=debug ts=2019-08-10T09:57:15.0376247Z caller=operator.go:1024 component=prometheusoperator msg="StatefulSet updated"
level=debug ts=2019-08-10T09:57:15.037727518Z caller=operator.go:1116 component=prometheusoperator msg="new statefulset generation inputs match current, skipping any actions"
level=info ts=2019-08-10T09:57:15.037763524Z caller=operator.go:1050 component=prometheusoperator msg="sync prometheus" key=monitoring/k8s-jvm
level=debug ts=2019-08-10T09:57:15.037773786Z caller=rules.go:154 component=prometheusoperator msg="selected RuleNamespaces" namespaces=monitoring namespace=monitoring prometheus=k8s-jvm
level=debug ts=2019-08-10T09:57:15.037813672Z caller=rules.go:196 component=prometheusoperator msg="selected Rules" rules= namespace=monitoring prometheus=k8s-jvm
level=debug ts=2019-08-10T09:57:15.037875945Z caller=operator.go:412 component=alertmanageroperator msg="update handler" old=40947800 cur=40947806
level=debug ts=2019-08-10T09:57:15.042340191Z caller=rules.go:71 component=prometheusoperator msg="no PrometheusRule changes" namespace=monitoring prometheus=k8s-jvm
level=debug ts=2019-08-10T09:57:15.054057426Z caller=operator.go:1116 component=prometheusoperator msg="new statefulset generation inputs match current, skipping any actions"
level=debug ts=2019-08-10T09:57:16.521772749Z caller=operator.go:1014 component=prometheusoperator msg="update handler" old=40947806 cur=40947816
level=debug ts=2019-08-10T09:57:16.521831326Z caller=operator.go:1024 component=prometheusoperator msg="StatefulSet updated"
level=info ts=2019-08-10T09:57:16.521859371Z caller=operator.go:1050 component=prometheusoperator msg="sync prometheus" key=monitoring/k8s-jvm
level=debug ts=2019-08-10T09:57:16.521870718Z caller=rules.go:154 component=prometheusoperator msg="selected RuleNamespaces" namespaces=monitoring namespace=monitoring prometheus=k8s-jvm
level=debug ts=2019-08-10T09:57:16.521866445Z caller=operator.go:412 component=alertmanageroperator msg="update handler" old=40947806 cur=40947816
level=debug ts=2019-08-10T09:57:16.521897717Z caller=rules.go:196 component=prometheusoperator msg="selected Rules" rules= namespace=monitoring prometheus=k8s-jvm
level=debug ts=2019-08-10T09:57:16.528093684Z caller=rules.go:71 component=prometheusoperator msg="no PrometheusRule changes" namespace=monitoring prometheus=k8s-jvm
level=debug ts=2019-08-10T09:57:16.539495005Z caller=operator.go:1116 component=prometheusoperator msg="new statefulset generation inputs match current, skipping any actions"
level=debug ts=2019-08-10T09:57:25.764094039Z caller=operator.go:1014 component=prometheusoperator msg="update handler" old=40947816 cur=40947842
level=debug ts=2019-08-10T09:57:25.764148759Z caller=operator.go:1024 component=prometheusoperator msg="StatefulSet updated"
level=info ts=2019-08-10T09:57:25.764189122Z caller=operator.go:1050 component=prometheusoperator msg="sync prometheus" key=monitoring/k8s-jvm
level=debug ts=2019-08-10T09:57:25.764203322Z caller=rules.go:154 component=prometheusoperator msg="selected RuleNamespaces" namespaces=monitoring namespace=monitoring prometheus=k8s-jvm
level=debug ts=2019-08-10T09:57:25.76430424Z caller=rules.go:196 component=prometheusoperator msg="selected Rules" rules= namespace=monitoring prometheus=k8s-jvm
level=debug ts=2019-08-10T09:57:25.766175951Z caller=operator.go:412 component=alertmanageroperator msg="update handler" old=40947816 cur=40947842
level=debug ts=2019-08-10T09:57:25.776819109Z caller=rules.go:71 component=prometheusoperator msg="no PrometheusRule changes" namespace=monitoring prometheus=k8s-jvm
level=debug ts=2019-08-10T09:57:25.789371691Z caller=operator.go:1116 component=prometheusoperator msg="new statefulset generation inputs match current, skipping any actions"
level=debug ts=2019-08-10T09:57:30.360456273Z caller=operator.go:412 component=alertmanageroperator msg="update handler" old=40947842 cur=40947881
level=debug ts=2019-08-10T09:57:30.360513306Z caller=operator.go:1014 component=prometheusoperator msg="update handler" old=40947842 cur=40947881
level=debug ts=2019-08-10T09:57:30.360542575Z caller=operator.go:1024 component=prometheusoperator msg="StatefulSet updated"
level=info ts=2019-08-10T09:57:30.360571895Z caller=operator.go:1050 component=prometheusoperator msg="sync prometheus" key=monitoring/k8s-jvm
level=debug ts=2019-08-10T09:57:30.360583413Z caller=rules.go:154 component=prometheusoperator msg="selected RuleNamespaces" namespaces=monitoring namespace=monitoring prometheus=k8s-jvm
level=debug ts=2019-08-10T09:57:30.360619537Z caller=rules.go:196 component=prometheusoperator msg="selected Rules" rules= namespace=monitoring prometheus=k8s-jvm
level=debug ts=2019-08-10T09:57:30.365463141Z caller=rules.go:71 component=prometheusoperator msg="no PrometheusRule changes" namespace=monitoring prometheus=k8s-jvm
level=debug ts=2019-08-10T09:57:30.376287114Z caller=operator.go:1116 component=prometheusoperator msg="new statefulset generation inputs match current, skipping any actions"
level=debug ts=2019-08-10T09:58:51.376525097Z caller=operator.go:714 component=prometheusoperator msg="PodMonitor delete"
level=info ts=2019-08-10T09:58:51.37659978Z caller=operator.go:1050 component=prometheusoperator msg="sync prometheus" key=monitoring/k8s
level=debug ts=2019-08-10T09:58:51.376614026Z caller=rules.go:154 component=prometheusoperator msg="selected RuleNamespaces" namespaces=monitoring namespace=monitoring prometheus=k8s
level=debug ts=2019-08-10T09:58:51.388065839Z caller=rules.go:196 component=prometheusoperator msg="selected Rules" rules=monitoring-prometheus-k8s-rules.yaml namespace=monitoring prometheus=k8s
level=debug ts=2019-08-10T09:58:51.401138259Z caller=rules.go:71 component=prometheusoperator msg="no PrometheusRule changes" namespace=monitoring prometheus=k8s
level=debug ts=2019-08-10T09:58:51.401191411Z caller=operator.go:1491 component=prometheusoperator msg="filtering namespaces to select ServiceMonitors from" namespaces=kube-system,monitoring,test-market-1,kube-public,test,udm,default,market,test-market-2 namespace=monitoring prometheus=k8s
level=debug ts=2019-08-10T09:58:51.40122511Z caller=operator.go:1506 component=prometheusoperator msg="selected ServiceMonitors" servicemonitors=monitoring/prometheus,monitoring/node-exporter,monitoring/kube-controller-manager,monitoring/grafana,monitoring/coredns,monitoring/kubelet,monitoring/kube-apiserver,monitoring/kube-scheduler,monitoring/kube-state-metrics,monitoring/prometheus-operator,monitoring/alertmanager namespace=monitoring prometheus=k8s
level=debug ts=2019-08-10T09:58:51.401236928Z caller=operator.go:1536 component=prometheusoperator msg="filtering namespaces to select PodMonitors from" namespaces=monitoring namespace=monitoring prometheus=k8s
level=debug ts=2019-08-10T09:58:51.401245524Z caller=operator.go:1549 component=prometheusoperator msg="selected PodMonitors" podmonitors= namespace=monitoring prometheus=k8s
level=debug ts=2019-08-10T09:58:51.473532313Z caller=operator.go:1453 component=prometheusoperator msg="updating Prometheus configuration secret skipped, no configuration change"
level=debug ts=2019-08-10T09:58:51.487464252Z caller=operator.go:1116 component=prometheusoperator msg="new statefulset generation inputs match current, skipping any actions"
level=info ts=2019-08-10T09:58:51.487511295Z caller=operator.go:1050 component=prometheusoperator msg="sync prometheus" key=default/k8s-jvm
level=debug ts=2019-08-10T09:58:51.487524107Z caller=rules.go:154 component=prometheusoperator msg="selected RuleNamespaces" namespaces=default namespace=default prometheus=k8s-jvm
level=debug ts=2019-08-10T09:58:51.487559684Z caller=rules.go:196 component=prometheusoperator msg="selected Rules" rules= namespace=default prometheus=k8s-jvm
level=debug ts=2019-08-10T09:58:51.490469174Z caller=rules.go:71 component=prometheusoperator msg="no PrometheusRule changes" namespace=default prometheus=k8s-jvm
level=debug ts=2019-08-10T09:58:51.5036815Z caller=operator.go:1116 component=prometheusoperator msg="new statefulset generation inputs match current, skipping any actions"
level=debug ts=2019-08-10T09:58:53.532010225Z caller=operator.go:689 component=prometheusoperator msg="PodMonitor added"
level=info ts=2019-08-10T09:58:53.53208956Z caller=operator.go:1050 component=prometheusoperator msg="sync prometheus" key=monitoring/k8s
level=debug ts=2019-08-10T09:58:53.532104971Z caller=rules.go:154 component=prometheusoperator msg="selected RuleNamespaces" namespaces=monitoring namespace=monitoring prometheus=k8s
level=debug ts=2019-08-10T09:58:53.540432826Z caller=rules.go:196 component=prometheusoperator msg="selected Rules" rules=monitoring-prometheus-k8s-rules.yaml namespace=monitoring prometheus=k8s
level=debug ts=2019-08-10T09:58:53.547894488Z caller=rules.go:71 component=prometheusoperator msg="no PrometheusRule changes" namespace=monitoring prometheus=k8s
level=debug ts=2019-08-10T09:58:53.547933478Z caller=operator.go:1491 component=prometheusoperator msg="filtering namespaces to select ServiceMonitors from" namespaces=default,market,test-market-2,kube-public,test,udm,kube-system,monitoring,test-market-1 namespace=monitoring prometheus=k8s
level=debug ts=2019-08-10T09:58:53.547966549Z caller=operator.go:1506 component=prometheusoperator msg="selected ServiceMonitors" servicemonitors=monitoring/coredns,monitoring/kube-state-metrics,monitoring/kube-controller-manager,monitoring/kube-scheduler,monitoring/grafana,monitoring/kube-apiserver,monitoring/prometheus,monitoring/alertmanager,monitoring/node-exporter,monitoring/prometheus-operator,monitoring/kubelet namespace=monitoring prometheus=k8s
level=debug ts=2019-08-10T09:58:53.547978035Z caller=operator.go:1536 component=prometheusoperator msg="filtering namespaces to select PodMonitors from" namespaces=monitoring namespace=monitoring prometheus=k8s
level=debug ts=2019-08-10T09:58:53.547986761Z caller=operator.go:1549 component=prometheusoperator msg="selected PodMonitors" podmonitors= namespace=monitoring prometheus=k8s
level=debug ts=2019-08-10T09:58:53.572475118Z caller=operator.go:1453 component=prometheusoperator msg="updating Prometheus configuration secret skipped, no configuration change"
level=debug ts=2019-08-10T09:58:53.582981218Z caller=operator.go:1116 component=prometheusoperator msg="new statefulset generation inputs match current, skipping any actions"
level=info ts=2019-08-10T09:58:53.583022121Z caller=operator.go:1050 component=prometheusoperator msg="sync prometheus" key=default/k8s-jvm
level=debug ts=2019-08-10T09:58:53.583033393Z caller=rules.go:154 component=prometheusoperator msg="selected RuleNamespaces" namespaces=default namespace=default prometheus=k8s-jvm
level=debug ts=2019-08-10T09:58:53.583061608Z caller=rules.go:196 component=prometheusoperator msg="selected Rules" rules= namespace=default prometheus=k8s-jvm
level=debug ts=2019-08-10T09:58:53.585537734Z caller=rules.go:71 component=prometheusoperator msg="no PrometheusRule changes" namespace=default prometheus=k8s-jvm
level=debug ts=2019-08-10T09:58:53.596433405Z caller=operator.go:1116 component=prometheusoperator msg="new statefulset generation inputs match current, skipping any actions"
It seems the namespace is selected appropriately, but the PodMonitor ends up not being selected for some reason. This will need a deeper look.
Any update on this issue?
Same problem: I added a PodMonitor but it does not work. There are no changes in Prometheus and no related logs from the prometheus-operator. Logs below:
level=debug ts=2019-12-12T02:48:25.237381365Z caller=operator.go:1542 component=prometheusoperator msg="filtering namespaces to select PodMonitors from" namespaces=devops namespace=devops prometheus=prometheus
I think I'm also hit by this. I am using the prometheus-operator helm chart (version 0.32.0).
```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: fluentd-metrics
  labels:
    prometheus: kube-prometheus
spec:
  podMetricsEndpoints:
  - targetPort: 24231
    path: /metrics
    interval: 10s
  namespaceSelector:
    matchLabels:
      namespace: kube-system
  selector:
    matchLabels:
      k8s-app: fluentd-gcp
```
```yaml
<snip>
prometheus:
  prometheusSpec:
    # Pick up all service monitors across all namespaces.
    serviceMonitorNamespaceSelector:
      any: true
    serviceMonitorSelector:
      any: true
    # Pick up all pod monitors across all namespaces.
    podMonitorNamespaceSelector:
      any: true
    podMonitorSelector:
      any: true
<snip>
```
level=debug ts=2020-07-07T15:20:57.627124847Z caller=operator.go:1497 component=prometheusoperator msg="filtering namespaces to select ServiceMonitors from " namespaces=kube-node-lease,kube-public,kube-system,default namespace=default prometheus=kube-prometheus
level=debug ts=2020-07-07T15:20:57.627157281Z caller=operator.go:1512 component=prometheusoperator msg="selected ServiceMonitors" servicemonitors=default/kube-apiserver,default/kube-grafana,default/kube-kube-state-metrics,default/kube-node-exporter,default/kube-operator,default/kube-prometheus,default/kubelet,default/prometheus-relay-server namespace=default prometheus=kube-prometheus
level=debug ts=2020-07-07T15:20:57.627178351Z caller=operator.go:1542 component=prometheusoperator msg="filtering namespaces to select PodMonitors from" namespaces=default namespace=default prometheus=kube-prometheus
level=debug ts=2020-07-07T15:20:57.627194761Z caller=operator.go:1555 component=prometheusoperator msg="selected PodMonitors" podmonitors= namespace=default prometheus=kube-prometheus
I find it interesting that the namespace output for PodMonitor is different from that for ServiceMonitor, even though the namespace selectors are configured exactly the same and the selection code for the two is basically identical.
`any: true` does not work in those selectors. If you want to select everything with a label selector, you need to use the "all" selector, which in Kubernetes is the empty one: `{}`. The `any: true` form is only possible within the ServiceMonitor and PodMonitor, as those translate directly to the Prometheus paradigm, which can only list individual namespaces or watch all of them; it is not a real Kubernetes label selector for selecting namespaces.
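To illustrate the distinction, here is a minimal sketch (resource names are illustrative, not from this thread): the Prometheus CR uses empty `{}` label selectors to mean "select all", while the PodMonitor's own `namespaceSelector` is where `any: true` is valid.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: k8s              # illustrative name
  namespace: monitoring
spec:
  # Empty ({}) label selectors match everything; `any: true` is NOT valid here.
  podMonitorNamespaceSelector: {}
  podMonitorSelector: {}
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: example-app      # illustrative name
  namespace: monitoring
spec:
  # In a PodMonitor's namespaceSelector, `any: true` (or matchNames) is the
  # supported form; it maps directly onto Prometheus's namespace discovery.
  namespaceSelector:
    any: true
  podMetricsEndpoints:
  - port: metrics        # references a *named* container port on the pod
  selector:
    matchLabels:
      k8s-app: example-app
```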
Thanks Frederic. Turns out my problem was caused by running prometheus-operator Helm chart 6.11.0, while PodMonitor support was only added in 6.12.0. And Helm ignores all unknown values (`podMonitorNamespaceSelector` and `podMonitorSelector`). facepalm
Also cannot get this to work :(
```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  annotations:
    meta.helm.sh/release-name: prometheus-process-exporter
    meta.helm.sh/release-namespace: telemetry
  creationTimestamp: "2020-12-14T12:59:24Z"
  generation: 3
  labels:
    app.kubernetes.io/instance: prometheus-process-exporter
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: prometheus-process-exporter
    app.kubernetes.io/version: 1.16.0
    helm.sh/chart: prometheus-process-exporter-0.1.0
  name: prometheus-process-exporter
  namespace: telemetry
  resourceVersion: "1469976"
  selfLink: /apis/monitoring.coreos.com/v1/namespaces/telemetry/podmonitors/prometheus-process-exporter
  uid: eb1a22fa-8229-433f-8880-67e7316c3262
spec:
  namespaceSelector:
    matchNames:
    - telemetry
  podMetricsEndpoints:
  - path: /metrics
    port: "9256"
  selector:
    matchLabels:
      app.kubernetes.io/instance: prometheus-process-exporter
      app.kubernetes.io/name: prometheus-process-exporter
```
```yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2020-12-14T23:18:00Z"
  generateName: prometheus-process-exporter-
  labels:
    app.kubernetes.io/instance: prometheus-process-exporter
    app.kubernetes.io/name: prometheus-process-exporter
<snip>
```
level=debug ts=2020-12-14T23:56:47.107595585Z caller=operator.go:1853 component=prometheusoperator msg="selected PodMonitors" podmonitors= namespace=telemetry prometheus=prometheus-operator-prometheus
```yaml
## Prometheus-operator image
##
image:
  repository: quay.io/coreos/prometheus-operator
  tag: v0.38.1

## Prometheus-config-reloader image to use for config and rule reloading
##
prometheusConfigReloaderImage:
  repository: quay.io/coreos/prometheus-config-reloader
  tag: v0.38.1
```
@jhwbarlow what are the values of `podMonitorNamespaceSelector` and `podMonitorSelector` in your Prometheus CR?
@paulfantom here are the values:

```yaml
podMonitorNamespaceSelector: {}
podMonitorSelector:
  matchLabels:
    release: prometheus-operator
```

So I guess the second of these is the problem? It is only going to consider PodMonitors with that label and value. I don't know where this value is coming from, as in the `values.yaml` of the chart it is also set to `{}`. I'll have to do some more digging. I'll edit the live resource and see if it picks up the PodMonitor.
Thanks!
EDIT: Just seen this value:

```yaml
## If true, a nil or {} value for prometheus.prometheusSpec.podMonitorSelector will cause the
## prometheus resource to be created with selectors based on values in the helm deployment,
## which will also match the podmonitors created
##
podMonitorSelectorNilUsesHelmValues: true
```
`podMonitorSelector: {}` did it for me.
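Putting the two findings above together, a sketch of the relevant Helm values (key paths follow the chart comment quoted above; verify against your chart version) would be:

```yaml
prometheus:
  prometheusSpec:
    # With podMonitorSelectorNilUsesHelmValues: true (the chart default), a nil
    # or {} podMonitorSelector is replaced by a release-label selector, so only
    # PodMonitors carrying that release label are picked up. Setting it to
    # false lets the empty selector mean "select all PodMonitors".
    podMonitorSelectorNilUsesHelmValues: false
    podMonitorSelector: {}
    podMonitorNamespaceSelector: {}
```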
This issue has been automatically marked as stale because it has not had any activity in the last 60 days. Thank you for your contributions.
This issue was closed because it has not had any activity in the last 120 days. Please reopen if you feel this is still valid.