dcadec
[Question] differences between dcadec and libdca (videolan)
Hi
What is the difference between dcadec (this project) and libdca from videolan?
greetings
These are unrelated projects. The main difference is that libdca can only decode the lossy DTS core, without extensions, while libdcadec decodes all core extensions (XCH, XXCH, XBR, X96) as well as the lossless extension (XLL).
ok. thanks
one thing.
libdca (videolan) installs a 'dcadec' executable just like your project does, which creates a conflict between the two. Could you rename your executable to something else?
This is to avoid breaking ~~ffmpeg~~ mplayer builds with libdca support.
Or is this a job for package maintainers?
greetings
ffmpeg doesn't have libdca support, it only has libdcadec support (this project). FWIW, libdca doesn't really have a reason to exist anymore. It hasn't been updated in years, and has an extremely limited feature set.
Oh, yes. It is a dependency for mplayer and other software installed by my distro, not for ffmpeg.
I misread the list of dependencies.
sorry :S
Hello,
How easy is it to port existing code from libdca to libdcadec? (e.g. https://github.com/athoik/gst-plugin-dtsdownmix/blob/master/gstdtsdownmix.c)
Could you add a pseudocode example to the docs, something like the one libdca has:
Pseudocode example
------------------
dca_state_t * state = dca_init (mm_accel());
loop on input bytes:
    if at least 14 bytes in the buffer:
        bytes_to_get = dca_syncinfo (...)
        if bytes_to_get == 0:
            goto loop to keep looking for sync point
        else
            get rest of bytes
            dca_frame (state, buf, ...)
            [dca_dynrng (state, ...); this is only optional]
            for i = 1 ... dca_blocks_num():
                dca_block (state)
                dca_samples (state)
                convert samples to integer and queue to soundcard
Use the gst libav plugin?
Well, having a standalone plugin for DTS is much better for embedded (Enigma2) machines; libav is huge when you have only a few MB of flash available.
Moreover, the custom gst plugin downmixes directly to LPCM without further conversions, while libav involves audioconvert, which causes high CPU usage on embedded machines.
Finally, there is some special code that makes passthrough available, and with libav there is no such option.
You could check dcadec.c for an example of how to use the API. FFmpeg's libavcodec/libdcadec.c also helps a little bit.
I guess the following "pseudocode" is a good start:
struct dcadec_context *ctx;
uint8_t *input;     /* one complete DTS frame, converted to 16-bit big-endian */
int input_size;
int **samples, nsamples, channel_mask, sample_rate, bits_per_sample, profile;

ctx = dcadec_context_create(0);
/* both calls return a negative error code on failure */
dcadec_context_parse(ctx, input, input_size);
dcadec_context_filter(ctx, &samples, &nsamples, &channel_mask,
                      &sample_rate, &bits_per_sample, &profile);
/* ...convert samples (one int plane per channel, nsamples each)
   to integer PCM and queue to soundcard... */
dcadec_context_destroy(ctx);
But how can I downmix to stereo? I cannot find anything related to downmixing.
Downmix is not supported by libdcadec right now. You would have to do that yourself.
Also note that you cannot re-create the context for every audio frame that needs decoding: you need to create it once and keep it (and flush it when seeking), otherwise the history information needed for lossless reconstruction is lost.
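To make the lifetime rules concrete, here is a rough sketch of a persistent-context decode loop. get_next_frame() and output_pcm() are hypothetical application callbacks (not part of libdcadec), and dcadec_context_clear() is assumed to be the flush call to use on seek:

#include <stdint.h>
#include <stddef.h>
#include <libdcadec/dca_context.h>

/* hypothetical callbacks: get_next_frame() yields one complete DTS frame
   converted to 16-bit big-endian (returns 0 at end of stream), output_pcm()
   consumes the decoded planar integer samples */
size_t get_next_frame(uint8_t **frame);
void output_pcm(int **samples, int nsamples, int channel_mask,
                int sample_rate, int bits_per_sample);

int main(void)
{
    struct dcadec_context *ctx = dcadec_context_create(0);  /* once per stream */
    uint8_t *frame;
    size_t size;
    int **samples, nsamples, mask, rate, bps, profile;

    while ((size = get_next_frame(&frame)) > 0) {
        if (dcadec_context_parse(ctx, frame, size) < 0)
            continue;                               /* skip damaged frames */
        if (dcadec_context_filter(ctx, &samples, &nsamples, &mask,
                                  &rate, &bps, &profile) < 0)
            continue;
        output_pcm(samples, nsamples, mask, rate, bps);
    }

    /* on seek: dcadec_context_clear(ctx); and keep feeding frames */
    dcadec_context_destroy(ctx);
    return 0;
}

The important point is that dcadec_context_create()/dcadec_context_destroy() bracket the whole stream, not a single frame.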
dcadec.c provides a high-level example of how to use the API to decode a raw DTS stream from an external file.
Decoding from another source (e.g. a memory buffer) is more complicated since there is no high-level API for this yet. Basically, libdcadec requires the API user to parse the DTS bitstream to some degree to find frame boundaries and convert the bitstream into the 16-bit big-endian format required by libdcadec. You can check FFmpeg's DTS parser code for implementation details, or the dca_stream.c code, which is somewhat less capable than FFmpeg's (no support for 14-bit words).
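As a tiny illustration of the frame-boundary part, here is a sketch of scanning a memory buffer for the DTS core sync word in 16-bit big-endian format (7F FE 80 01). This only finds candidate frame starts; a real parser, like the ones mentioned above, still reads the frame size from the header and handles the little-endian and 14-bit variants, converting them to 16-bit big-endian before passing data to libdcadec:

#include <stdint.h>
#include <stddef.h>

/* return the offset of the first 16-bit big-endian DTS core sync word
   (7F FE 80 01) in buf, or -1 if none is found */
static ptrdiff_t find_dts_sync(const uint8_t *buf, size_t size)
{
    for (size_t i = 0; i + 4 <= size; i++)
        if (buf[i] == 0x7f && buf[i + 1] == 0xfe &&
            buf[i + 2] == 0x80 && buf[i + 3] == 0x01)
            return (ptrdiff_t)i;
    return -1;
}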
Downmixing to stereo using the custom matrix coefficients embedded in the DTS stream is a planned feature, but not yet supported. Right now libdcadec always outputs the maximum number of channels it can decode.
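In the meantime, a do-it-yourself stereo fold-down over libdcadec's planar output could look roughly like the sketch below. The channel indices and the -3 dB coefficients are assumptions for a plain 5.1 layout; in real code the indices must be mapped from the channel_mask returned by dcadec_context_filter(), and the embedded downmix coefficients from the stream are ignored here:

#include <stdint.h>

/* naive 5.1 -> stereo fold-down; in[] holds one int plane per channel,
   assumed order L, R, C, Ls, Rs (LFE dropped) -- map from channel_mask
   in real code */
static void downmix_to_stereo(int **in, int nsamples,
                              int32_t *left, int32_t *right)
{
    const double c = 0.7071;  /* about -3 dB for centre and surrounds */
    for (int n = 0; n < nsamples; n++) {
        double l = in[0][n] + c * in[2][n] + c * in[3][n];
        double r = in[1][n] + c * in[2][n] + c * in[4][n];
        left[n]  = (int32_t)l;   /* clamp to the output bit depth as needed */
        right[n] = (int32_t)r;
    }
}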
Well, I thought that support for downmixed output was already there and better than libdca's. I really hope for a version that will make it possible.
It would also be nice for the API to support reading from another source (a memory buffer).
Keep up the good work @foo86, I will monitor the progress frequently.