go-dht
Doesn't work on Pi Zero (single core CPU)
This is a follow up for the discussion at https://github.com/google/periph/issues/225
Linux version is 4.14.79+ armv6l.
For Setpriority I had to run the executable as root to make the syscall work.
Success rate without Setpriority: 0/5
Success rate with Setpriority: 1/5
I guess the latter was just a lucky hit and Setpriority doesn't really help.
BTW, success rate reading from the IIO device driver is 9/10.
Go version is 1.12.1
I think the issue is that there is just not enough CPU power on the Pi Zero to do the periph.io busy read fast enough. On a Raspberry Pi 3 a pin read took around 0.2 microseconds (µs).
Busy read: https://github.com/MichaelS11/go-dht/blob/be44b9ee7fec8f81d57dea89c17d26961183266e/dhtNotWindows.go#L75-L90
Would you be able to do a bunch of ReadRetry in a loop then do a pprof, then post the result?
import "net/http"
import _ "net/http/pprof"
go http.ListenAndServe(":6060", nil)
go tool pprof http://localhost:6060/debug/pprof/profile?seconds=60
Or we can write up some code to do a busy read in a loop and count how many pin reads get done in a set amount of time.
I'll try the latter.
Please note, I'm not sure if setting the output after changing the mode to input is correct here: https://github.com/MichaelS11/go-dht/blob/be44b9ee7fec8f81d57dea89c17d26961183266e/dhtNotWindows.go#L69-L71
My understanding is that In(gpio.PullUp) is similar to Out(gpio.Low). So if there is an error with the PullUp, then the Out needs to be set to High so that the next loop is ready to go. The sensor waits for Low for so many microseconds before it starts sending. If your understanding is different, please help me understand what should be done instead at that point in the code.
Ah never mind, I misread that part :-/ Still new to Go and its error handling... Sorry!
Number of reads in 10 ms ranges from 4000 to 7500 (so ~1.3us to 2.5us per read).
Are you sure that the Pi 3 can read in 0.2us (which would be 10 times faster)?
If I measure the time for every read and sort the results, I can observe that some reads take 50-200 µs. So this is very likely an issue of preemption and not of CPU performance. Here is the code I have used: https://gist.github.com/schlamar/b1f6e83725ec3b83346bf9861e7693a2 (Please note that read performance is 3-4 times worse when measuring every read duration, so the numbers above are without that calculation.)
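The approach in the gist can be sketched roughly like this; `readPin` is again a stub in place of the real periph.io pin read, and keep in mind that timestamping every read slows the loop down considerably:

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// readPin is a stand-in for the real periph.io pin read.
func readPin() bool { return true }

// measure times n consecutive reads and returns the durations sorted
// ascending, so the preemption outliers end up at the tail.
func measure(n int) []time.Duration {
	durs := make([]time.Duration, n)
	for i := range durs {
		start := time.Now()
		readPin()
		durs[i] = time.Since(start)
	}
	sort.Slice(durs, func(i, j int) bool { return durs[i] < durs[j] })
	return durs
}

func main() {
	durs := measure(5000)
	n := len(durs)
	// A low median with a handful of 50-200 µs outliers at the tail
	// points at preemption rather than raw CPU speed.
	fmt.Println("median:", durs[n/2])
	fmt.Println("p99:   ", durs[n*99/100])
	fmt.Println("max:   ", durs[n-1])
}
```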
It might be possible to give a goroutine realtime priority by using LockOSThread and calling sched_setscheduler with unix.Syscall. But I do not want to invest too much time in your project anymore. I'm either going with the device tree overlay or with /dev/gpiochipX for my own project.
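A minimal sketch of that idea, using only the stdlib `syscall` package (Linux-only; `SCHED_FIFO` and the one-field `sched_param` struct are defined locally because the stdlib does not export them). Like the Setpriority experiment above, this needs root:

```go
package main

import (
	"fmt"
	"runtime"
	"syscall"
	"unsafe"
)

// SCHED_FIFO is Linux's realtime FIFO scheduling policy; the constant
// is not exported by the stdlib syscall package, so define it here.
const SCHED_FIFO = 1

// schedParam mirrors the kernel's struct sched_param (a single int).
type schedParam struct {
	Priority int32
}

// setRealtimePriority pins the calling goroutine to its OS thread and
// asks the kernel for SCHED_FIFO via sched_setscheduler. Passing pid 0
// applies the policy to the calling thread. Requires root.
func setRealtimePriority(priority int32) error {
	runtime.LockOSThread() // the policy is per-thread, so stay on it
	param := schedParam{Priority: priority}
	_, _, errno := syscall.Syscall(
		syscall.SYS_SCHED_SETSCHEDULER,
		0, // 0 = calling thread
		uintptr(SCHED_FIFO),
		uintptr(unsafe.Pointer(&param)),
	)
	if errno != 0 {
		return errno
	}
	return nil
}

func main() {
	if err := setRealtimePriority(1); err != nil {
		fmt.Println("sched_setscheduler failed (run as root?):", err)
		return
	}
	fmt.Println("sched_setscheduler: SCHED_FIFO set")
	// ...do the timing-critical busy read here...
}
```

Whether SCHED_FIFO actually helps on a single-core Pi Zero is an open question; it stops other user tasks from preempting the read, but it cannot remove the kernel's own interrupt handling.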
However, if you provide me a runnable example I can test that for you.
@schlamar Thank you very much for your time and information. It is very good information. It is basically what I was thinking: the Pi Zero does not have enough CPU cores (speed?) to do the busy read and the other needed operations. I would suspect that the long reads over 50 µs are because of system context switches.
Do not think there is much else that can be done until periph.io updates the GPIO calls. Do not think changing the priority is going to help because the Pi Zero is single core. If it was dual core or more then things might be different, because it might be possible to hog (have priority on) one of the cores.
Please note that C based libraries like https://github.com/adafruit/Adafruit_Python_DHT and https://github.com/d2r2/go-dht work pretty reliably, and all of the ones I checked use the sched_setscheduler syscall (https://github.com/adafruit/Adafruit_Python_DHT/blob/master/source/common_dht_read.c#L57 https://github.com/d2r2/go-dht/blob/master/gpio.h#L378).
I think I'm just going to wait for https://github.com/google/periph/issues/225#issuecomment-479151127 or https://github.com/google/periph/pull/398 to be implemented.
For my application, I will only query the temperature every 2-3 minutes. I got around the issues by kicking off a goroutine that just does a Read every 10ish seconds and sets a variable. This got around the issue of it erroring out sometimes. When the value is good I just ship it back across a chan and store it; when there is an error I just drop it. Then I only have to use the stored value, which is up-to-date enough for my application whenever I query the temperature. Just a semi-workaround for anyone interested.
@jmross14 Did you check out ReadBackground?
I didn't try it. Mostly because I was trying to experiment with using an actor model so I didn't have to use any locks. But I am sure it does it somewhat the same way. On the Pi Zero, ReadRetry works for sure if I set the retries at 10 or more, but that was blocking too long for what I was doing. I agree with your assessments above: I tried to use periph.io directly and the WaitForEdge calls just were not fast enough to pick up edges that happened below 100 microseconds.
If someone wants to do a PR like this one https://github.com/MichaelS11/go-hx711/pull/2 it might help with this issue.