Francesco Boscarino

51 comments by Francesco Boscarino

Hi, yes, from the raw data I captured "N/A" as the value. I don't think Cisco will fix it soon. I understand the lib adheres to the standard, but it should be able to...

Hi, this is the raw data:

```json
{
  "Id": "Power",
  "Name": "Power",
  "@odata.context": "/redfish/v1/$metadata#Chassis/Members/$entity/Power",
  "PowerControl": {
    "PhysicalContext": "PowerSupply",
    "PowerLimit": {
      "LimitException": "NoAction"
    },
    "PowerConsumedWatts": "152",
    "PowerMetric": {
      "IntervalInMin": 0.0833,
      "MinConsumedWatts": ...
```

Hi, I suspect Cisco will never fix it, so you can just ignore it. The only thing I suggest is not panicking if the payload cannot be parsed, but...

panic: json: cannot unmarshal string into Go struct field .temp.Voltages.temp.UpperThresholdNonCritical of type float32
goroutine 1 [running]:
main.main()
/Users/francesco/go/test/main.go:39 +0x1f8
exit status 2

I went on to call the endpoints directly and parse...
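The comments above concern a Go client, but the workaround they describe (coerce numeric fields leniently instead of panicking when the firmware sends `"152"` or `"N/A"`) is language-agnostic. A minimal Python sketch, assuming field names taken from the payload shown earlier; `lenient_float` is a hypothetical helper, not part of any library:

```python
# Hedged sketch: tolerant coercion for Redfish numeric fields that may arrive
# as real numbers, numeric strings ("152"), or sentinels like "N/A".
from typing import Optional


def lenient_float(value) -> Optional[float]:
    """Return a float when possible, None for N/A-style sentinels or junk."""
    if value is None:
        return None
    if isinstance(value, (int, float)):
        return float(value)
    text = str(value).strip()
    if text.upper() in {"", "N/A", "NA"}:
        return None
    try:
        return float(text)
    except ValueError:
        # Unknown garbage: treat it like a missing reading rather than failing.
        return None


power = {"PowerConsumedWatts": "152", "UpperThresholdNonCritical": "N/A"}
watts = lenient_float(power["PowerConsumedWatts"])             # 152.0
threshold = lenient_float(power["UpperThresholdNonCritical"])  # None
```

The same idea in Go would be a custom type with an `UnmarshalJSON` method that accepts both string and number tokens.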

I managed to start it on Kubernetes; the deployment is not yet perfect, but it is working. It would help if the Dockerfile entrypoint were changed from: ENTRYPOINT ["/app/rmqtt/rmqtt-bin/rmqttd"] to ENTRYPOINT ["sh",...

Hi, I created a StatefulSet with a custom rmqtt image containing only a start shell script that derives the cluster instance ID from the pod index:

```sh
#!/bin/sh
...
```
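The start script above is truncated, but the idea it describes (derive a stable node ID from the StatefulSet pod ordinal) can be sketched in a few lines. This is a hypothetical illustration, not the author's actual script: `node_id_from_hostname` is an invented name, and the +1 offset assumes the broker wants 1-based node IDs; adjust if yours differ.

```python
# Hedged sketch: map a StatefulSet pod hostname like "rmqtt-2" to a cluster
# node id. StatefulSet pods get stable hostnames "<name>-<ordinal>".
import os
import re


def node_id_from_hostname(hostname: str) -> int:
    """Extract the trailing ordinal and return it 1-based (assumption)."""
    m = re.search(r"-(\d+)$", hostname)
    if m is None:
        raise ValueError(f"no StatefulSet ordinal in {hostname!r}")
    return int(m.group(1)) + 1


if __name__ == "__main__":
    # Inside a pod, HOSTNAME carries the StatefulSet pod name.
    print(node_id_from_hostname(os.environ.get("HOSTNAME", "rmqtt-0")))
```

A shell equivalent would strip everything up to the last `-` from `$HOSTNAME` and pass the result to the broker binary.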

Hi, I tried `perform_job` in a custom worker, but it raises this error:

```
Traceback (most recent call last):
  File "/usr/local/Caskroom/miniconda/base/envs/meraki/lib/python3.10/site-packages/rq/worker.py", line 1633, in perform_job
    return_value = job.perform()
  File "/usr/local/Caskroom/miniconda/base/envs/meraki/lib/python3.10/site-packages/rq/job.py",...
```

Hi, it doesn't work with a CustomJob either:

```
Traceback (most recent call last):
  File "/usr/local/Caskroom/miniconda/base/envs/meraki/lib/python3.10/threading.py", line 1009, in _bootstrap_inner
    self.run()
  File "/usr/local/Caskroom/miniconda/base/envs/meraki/lib/python3.10/threading.py", line 946, in run
    self._target(*self._args, **self._kwargs)
  File "/Users/francesco/python/cloud-backup-server/main.py", line 56,...
```

Hi, I retested the CustomJob, this time setting `job_class` in the Worker definition, as in your code; previously I had put it on the Queue definition. It seems to work; I just need to test a longer queue...

Hi, I just noticed that STREAMS (result streams) are not deleted once their jobs have been. Is that by design?