filtering introduces small DC offset
I have noticed that for every value except rise_time = 0, the filtering process introduces a small DC offset in the LTC square wave, so the wave is no longer centered around zero. I haven't delved into why it happens, and in my system it doesn't affect the readability of the encoded signal, so everything still works. However, I was wondering whether it is something that can be fixed.
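A minimal sketch along these lines should show the offset (this is illustrative only, assuming libltc's encoder API — ltc_encoder_create, ltc_encoder_set_filter, ltc_encoder_encode_frame, ltc_encoder_get_buffer — and that 128 is the unsigned 8-bit sample center; error handling omitted):

#include <stdio.h>
#include <ltc.h>

int main(void) {
	LTCEncoder *enc = ltc_encoder_create(48000, 25, LTC_TV_625_50, 0);
	ltc_encoder_set_filter(enc, 40.0);  /* non-zero rise_time enables the LPF */
	ltcsnd_sample_t buf[8192];
	double sum = 0;
	long total = 0;
	for (int f = 0; f < 100; f++) {
		/* timecode is not advanced here; that is not needed to see the offset */
		ltc_encoder_encode_frame(enc);
		int n = ltc_encoder_get_buffer(enc, buf);
		for (int i = 0; i < n; i++)
			sum += (double)buf[i] - 128.0;  /* deviation from the sample center */
		total += n;
	}
	printf("mean offset: %f\n", sum / total);
	ltc_encoder_free(enc);
	return 0;
}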
Likely because the LPF uses an 8-bit integer. Changing val to a float would probably fix this.
https://github.com/x42/libltc/blob/84295cb4277d27ba17826c7f4f0a77b99bbd61b6/src/encoder.c#L54-L58
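For context, the loop behind that link looks roughly like this (reconstructed from the patch below, not quoted verbatim); val is the 8-bit filter state, tcf the filter coefficient derived from rise_time, tgtval the target level of the current edge, and wave the output buffer:

ltcsnd_sample_t val = SAMPLE_CENTER;
int m = (n+1)>>1;
for (i = 0 ; i < m ; i++) {
	val = val + tcf * (tgtval - val);   /* double expression assigned back to an 8-bit sample */
	wave[n-i-1] = wave[i] = val;
}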
Actually, val already gets promoted to double because of the tcf variable, so both the multiplication and the accumulation of the filter are evaluated in double precision. However, the result gets truncated back to 8 bits when it is assigned to val, and since truncation always rounds the positive intermediate value down, every filtered sample is biased slightly low, which shows up as a one-sided (DC) error rather than symmetric noise. I tested it and found that rounding instead of truncating is enough to solve the issue:
ltcsnd_sample_t val = SAMPLE_CENTER;
int m = (n+1)>>1;
for (i = 0 ; i < m ; i++) {
	/* round to the nearest 8-bit value instead of truncating */
	val = lround(val + tcf * (tgtval - val));
	wave[n-i-1] = wave[i] = val;
}
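A standalone sketch that runs the same recurrence with truncation and with rounding shows the one-sided error directly (tcf, tgtval, and the step count are made-up illustrative values, not the ones libltc computes):

#include <stdio.h>
#include <math.h>

typedef unsigned char sample_t;   /* stands in for ltcsnd_sample_t (unsigned 8-bit) */

/* Run one edge of the filter and return the accumulated deviation from the
 * 128 center, either truncating (round_it = 0) or rounding (round_it = 1). */
static double edge_deviation(int round_it, double tgtval, int steps, double tcf) {
	sample_t val = 128;
	double dev = 0;
	for (int i = 0; i < steps; i++) {
		double next = val + tcf * (tgtval - val);
		val = round_it ? (sample_t)lround(next) : (sample_t)next;
		dev += (double)val - 128.0;
	}
	return dev;
}

int main(void) {
	/* one rising edge towards 218 plus one falling edge towards 38 */
	double truncated = edge_deviation(0, 218, 20, 0.4) + edge_deviation(0, 38, 20, 0.4);
	double rounded   = edge_deviation(1, 218, 20, 0.4) + edge_deviation(1, 38, 20, 0.4);
	printf("truncating: %+.1f   rounding: %+.1f\n", truncated, rounded);
	return 0;
}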
Nice!
It should probably use float; double precision is overkill here.
Also, I'm not sure lround() is portable; src/ltc.c defines a rint() fallback for MSVC and AVR, so this might need a similar workaround.
e.g.
val = floorf (.5f + val + tcf * (tgtval - val));
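Dropped back into the loop above, that would look roughly like this (a sketch, not a tested patch; variables as in encoder.c):

ltcsnd_sample_t val = SAMPLE_CENTER;
int m = (n+1)>>1;
for (i = 0 ; i < m ; i++) {
	/* single-precision rounding instead of lround()/truncation */
	val = floorf (.5f + val + tcf * (tgtval - val));
	wave[n-i-1] = wave[i] = val;
}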