rtl_433
Add test pattern and demo mode
This adds a test pattern to each decoder's r_device and a demo mode option (-D) to run all test patterns.
An idea: have a fixed test pattern for each possible decoder output and a demo mode to run all of them.
I.e. each decoder should have one known-good pattern per data_make branch (say wind, rain, ... messages) to exercise and show-case all possible outputs.
This is not really intended for built-in regression tests (that would need many more patterns and edge cases with expected outputs) but rather as a quick demo showing users what output to expect from the decoders, and as a way to generate a show-case of all our decoders.
Opinions, ideas, discussion?
Sounds like a good idea to me, especially as the data model is not really normalized across all decoders.
On the other hand, why not use the example data from rtl_433_tests? There is example data in JSON format already.
Yes, it would be a good idea to pull the demo data from the test repo, just once, with some Python hack. The idea of the demo, as opposed to the test repo, is to allow instant demo output in every format the user selects, e.g. when the user wants to see what the output of a given device looks like in a custom MQTT setup.
Something I also want to do later is to output the help text from the source comments. It could be compiled in with some transformation step. Then -R 52 -D could tell you exactly what that device is, how it works, and show some example outputs.
This would also help when trying to fuzz rtl_433 (see #1062), as the collected demo inputs can be used as seeds to run the decoders.
Playing devil's advocate here: we could also collect the test/demo pattern in an external file, transform that into a header to include and use it as a library (e.g. global_pattern_lib[52]).
Not sure what fits best.
Merge at will.
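As a rough sketch of the header-generation idea (the JSON layout, the `global_pattern_lib` name, and the example codes here are all assumptions, not actual rtl_433 data), a small Python step could transform an external pattern file into an includable C header:

```python
# Sketch: generate a C header mapping protocol numbers to known-good pattern codes.
# The input shape (protocol number -> list of code strings) is a made-up example.
patterns = {"52": ["{25}fb2dd58"], "40": ["{64}0011223344556677"]}

def make_header(patterns):
    """Emit a C header with a designated-initializer lookup table of pattern strings."""
    lines = ["/* Generated file - do not edit. */",
             "static char const *global_pattern_lib[][8] = {"]
    for proto in sorted(patterns, key=int):
        codes = ", ".join('"%s"' % c for c in patterns[proto])
        lines.append("    [%s] = {%s, NULL}," % (proto, codes))
    lines.append("};")
    return "\n".join(lines)

print(make_header(patterns))
```

A build step would write this output to a header that the demo mode includes, so patterns stay in one external file but still compile in.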
I've gathered some data to seed my fuzzer. I will attach them here.
These are gathered by using the rtl_433_tests library and grepping the output/manually looking at output.
There are 2 zips: one has manually selected samples (correct), the other was created by doing some automated work (scraped).
They might be useful because they already contain bitbuffers for a large share of the supported devices.
The scripts below are quickly hacked together; I've listed them just to give insight into my methods. It would be good to formalize this method of generating bitbuffers, maybe by building such functionality into rtl_433 or writing a cleaner Python script.
Script used for scraping usable bitbuffers:
#!/usr/bin/env bash
# script that runs rtl_433 over all files (run in rtl_433_tests) and outputs result as json
echo running rtl_433
rtl_bin="/path/to/rtl_433"
find . -name "*cu8" -exec $rtl_bin -r {} -G 4 -F json -vv \;
Pipe the output of that script to a file, then split it into one file per line using split -l 1 input_file.
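If you want to keep everything in Python rather than using the split utility, the one-line-per-file step can be sketched as (file names and directories are arbitrary):

```python
import os

def split_lines(input_file, out_dir):
    """Write each line of input_file to its own numbered file in out_dir."""
    os.makedirs(out_dir, exist_ok=True)
    with open(input_file) as f:
        for i, line in enumerate(f):
            with open(os.path.join(out_dir, 'x%05d' % i), 'w') as out:
                out.write(line)

# Example usage:
# split_lines('scrape-output.json', 'split/')
```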
#!/usr/bin/env python3
"""Script to filter the split files down to those that still produce decoder output."""
import os
import subprocess
import shutil

dir = 'split/'
out_dir = 'split-filtered/'

def main():
    for f in os.listdir(dir):
        filepath = dir + f
        result = subprocess.run(['/path/to/rtl_433', '-G', '4', '-y', '@' + filepath,
                                 '-F', 'json'], capture_output=True)
        if result.returncode == 0:
            with open(filepath) as file:
                line = file.readline()
            print(line, end='')
            print(result.stdout.decode('utf8'), end='')
            shutil.copy(filepath, out_dir + f)

if __name__ == "__main__":
    main()
This should give a directory with bitbuffers that generate useful output.
The last step is to collect these bitbuffers per decoder model:
#!/usr/bin/env python3
import json
import os
import subprocess
import collections

input_dir = '/home/rick/hobby/rtl/tests/split2-filtered/'
rtl_binary = '/home/rick/hobby/rtl/asan/build/src/rtl_433'

def main():
    results = collections.defaultdict(list)
    counter = 0
    for filename in os.listdir(input_dir):
        path = input_dir + filename
        #print(filename)
        result = subprocess.run([rtl_binary, '-G', '4', '-y', '@' + path, '-F', 'json'],
                                capture_output=True)
        if result.returncode == 0:
            with open(path) as file:
                line = file.readline().rstrip()
            output = result.stdout.decode('utf8').rstrip()
            outputs = output.splitlines()
            for output_line in outputs:
                data = json.loads(output_line)
                model = data['model']
                results[model].append(line)
                # print(model, len(results[model]))
        counter += 1
        if counter % 100 == 0:
            print()

    total = 0
    for model, data in results.items():
        total += len(data)
        print(model, len(data))
    print('total:', total)

    result_string = json.dumps(results)
    with open('split2-data-points.json', 'w') as wr_file:
        wr_file.write(result_string)

if __name__ == "__main__":
    main()
This leaves you with a JSON file keyed by decoder model name, where each value is a list of bitbuffers that trigger that decoder.
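To illustrate the shape of that file (the model names and codes below are invented, not real samples), consuming it looks like:

```python
import json

# split2-data-points.json maps decoder model name -> list of bitbuffer codes.
sample = '{"Acurite-Tower": ["{65}a1b2c3"], "LaCrosse-TX": ["{44}d4e5f6", "{44}a7b8c9"]}'
results = json.loads(sample)

for model, bitbuffers in results.items():
    print(model, len(bitbuffers))
```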
This is an archive with rfraw samples for all the decoders with a known signal (key = rfraw signal, value = json output(s))
When I find the time I'll look into building a sort of testing/demo mode into the r_devices. That demo/testing mode should also be usable for testing, i.e. it should return 1 when one of the decoders no longer works as it should.
That way the samples live right in the decoder, and someone writing a new decoder can directly add some tests to it to prevent future regressions.
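A minimal sketch of such a pass/fail check (the stored expected/actual line pairing and the volatile `time` field to ignore are assumptions, not anything rtl_433 defines) could compare decoder JSON against a stored reference and report failure with a non-zero status:

```python
import json
import sys

VOLATILE = {'time'}  # fields expected to differ between runs

def outputs_match(expected_line, actual_line):
    """Compare two JSON output lines, ignoring volatile fields."""
    exp = {k: v for k, v in json.loads(expected_line).items() if k not in VOLATILE}
    act = {k: v for k, v in json.loads(actual_line).items() if k not in VOLATILE}
    return exp == act

def run_checks(pairs):
    """pairs: list of (expected_line, actual_line); return 0 if all match, else 1."""
    failed = [i for i, (e, a) in enumerate(pairs) if not outputs_match(e, a)]
    for i in failed:
        print('sample %d: mismatch' % i, file=sys.stderr)
    return 1 if failed else 0
```

The return value of run_checks would become the process exit code, so CI can flag a decoder regression directly.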