DbcParser

API design targets 2.0 discussion

Open Uight opened this issue 5 months ago • 12 comments

This issue is meant to discuss how the API of the DbcParser should be changed to allow maximum usability across multiple use cases.

For me the use case is to parse the DBC once and then use this data to send and receive CAN and CAN FD frames. The only real requirement for this is that the receive path should be fast (fast enough to handle more than 1000 messages per second, ideally sequentially, but I would also parallelize it if that's not reachable). Sending does not have to be that fast, as normally we would only send a few messages with cycle times of around 10 ms or less, maybe in the ballpark of 200 messages per second.

Additional requirements:

  1. I need the pack and unpack functions to work with byte arrays (a rough sketch of what such signatures could look like follows this list).
  2. I need extended multiplexing to be supported.
  3. (Bonus: I don't really like properties that could be null in an API => see the current custom properties)
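
Just to make requirement 1 concrete, something along these lines is what I have in mind for byte-array based signatures. All names here (IMessagePacker, Pack, TryUnpack) are made up for this discussion, not the current DbcParser API:

```csharp
using System.Collections.Generic;

// Hypothetical shape of byte-array based pack/unpack calls.
public interface IMessagePacker
{
    // Packs the given physical values into the payload of the message with this CAN ID.
    byte[] Pack(uint canId, IReadOnlyDictionary<string, double> physicalValues);

    // Unpacks a received payload (classic CAN or CAN FD, up to 64 bytes) into
    // physical values keyed by signal name. Returns false if the ID is unknown.
    bool TryUnpack(uint canId, byte[] payload, out IReadOnlyDictionary<string, double> physicalValues);
}
```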

For this I see some possibilities:

  1. Provide the basic functions and let the user make it fast.
  2. Change the API so it is "fast" right away.
  3. Keep the API for the normal DBC stuff and move all packing and unpacking to a "separate", let's say, namespace where everything is optimized for speed.

From my tests with the benchmarker for packing and unpacking, I know that you cannot afford to recalculate any properties every time. Some properties are especially relevant for receiving: does the signal have a scaling (factor or offset)? If not, don't calculate it. Does the message I receive contain multiplexed signals? And is a specific signal multiplexed? (A minimal sketch of what I mean by precomputing this follows below.)
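
To make that concrete, this is roughly the kind of precomputed data I mean. All type and property names here are hypothetical, not part of the current API:

```csharp
using System.Collections.Generic;

// Hypothetical precomputed per-signal data: everything that is constant after
// parsing is resolved once, so the receive path only does the arithmetic it really needs.
public sealed class PackedSignal
{
    public string Name { get; set; } = "";
    public int StartBit { get; set; }
    public int Length { get; set; }
    public bool IsLittleEndian { get; set; }
    public bool IsSigned { get; set; }
    public double Factor { get; set; } = 1.0;
    public double Offset { get; set; }

    // Flags resolved once after parsing so the hot path can skip work entirely.
    public bool HasScaling { get; set; }        // Factor != 1.0 || Offset != 0.0
    public bool IsMultiplexed { get; set; }     // decode only when the multiplexor matches
    public int MultiplexorValue { get; set; }
}

public sealed class PackedMessage
{
    public uint CanId { get; set; }
    public bool HasMultiplexedSignals { get; set; }   // skip the multiplexor lookup if false
    public List<PackedSignal> Signals { get; } = new();
}
```

On receive, the unpacker would look the message up by CAN ID in a Dictionary<uint, PackedMessage> and only apply Factor/Offset when HasScaling is set.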

To achieve this I would consider several options:

  1. Convert the DBC to a packer with a function like DBC.CreatePacker(). The packer then holds all data internally in an optimized way and allows for functions like Packer.UnpackMessage(uint ID, byte[] data, out Dictionary<string, double> values) (see the sketch after this list).

The big contra here is that you pretty much have all classes duplicated (just like currently with the immutable stuff). This introduces more maintenance and probably some inconsistencies with every change. Pro: no API change, and probably the fastest way possible, but I wouldn't say you need every grain of performance, as then you're probably using the wrong system anyway.

  2. Keep the packer separate and use the DBC in the packer. This would require the API of the DBC to change, as dictionaries are needed for speed. Contra: API changes, only reasonably fast. Pro: no duplicated classes.
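
To illustrate option 1, here is a rough sketch of what such a packer could look like internally, reusing the hypothetical PackedMessage/PackedSignal types from the sketch above. The bit extraction and multiplexor handling are only placeholders:

```csharp
using System.Collections.Generic;
using System.Linq;

// Rough sketch of option 1: the packer copies what it needs out of the parsed
// DBC into its own ID-indexed structures, so unpacking is a single dictionary
// lookup plus plain arithmetic. All names here are hypothetical.
public sealed class Packer
{
    private readonly Dictionary<uint, PackedMessage> _messagesById;

    public Packer(IEnumerable<PackedMessage> messages)
    {
        _messagesById = messages.ToDictionary(m => m.CanId);
    }

    public bool UnpackMessage(uint id, byte[] data, out Dictionary<string, double> values)
    {
        values = new Dictionary<string, double>();
        if (!_messagesById.TryGetValue(id, out var message))
            return false;

        foreach (var signal in message.Signals)
        {
            // A real implementation would only decode a multiplexed signal when the
            // multiplexor value matches; this sketch skips multiplexed signals entirely.
            if (message.HasMultiplexedSignals && signal.IsMultiplexed)
                continue;

            // Factor/offset are only applied when the precomputed flag says so.
            double raw = ExtractRawValue(data, signal);
            values[signal.Name] = signal.HasScaling ? raw * signal.Factor + signal.Offset : raw;
        }
        return true;
    }

    private static double ExtractRawValue(byte[] data, PackedSignal signal)
    {
        // Placeholder for the actual bit extraction (endianness, sign, CAN FD lengths).
        return 0.0;
    }
}
```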

I'll write more on this if more comes to mind ;)

Uight, Aug 26 '24 16:08