Robert Forkel
For me, one of the guiding principles of CLDF is that only things that have clear, automated applications can be specified. So there's no need to come up with a...
Well, I think in the absence of clear-cut analysis tools/methods, I'd just keep LGR text/gloss **as is** in some non-CLDF-standard column if round-tripping is important. So far, I haven't seen...
I think parsing LGR unambiguously is already not entirely trivial (see https://github.com/cldf/pyigt/pull/15). So I'm not sure a lot of automated analysis can be built on the finer points of it.
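To illustrate one of the finer points: even the basic morpheme-alignment requirement of LGR (word-for-word, morpheme-for-morpheme correspondence between object line and gloss line) takes some care. The sketch below is not pyigt's actual parser — it deliberately only handles hyphen-separated morphemes and ignores the constructs (infixes, reduplication, clitics) that make real parsing non-trivial:

```python
import itertools


def check_alignment(phrase, gloss):
    """Naive LGR alignment check: same number of words, and per word
    the same number of hyphen-separated morphemes in object line and
    gloss line.

    Deliberately ignores the harder cases -- infixes in <...>,
    reduplication with ~, clitic boundaries with = -- which is
    exactly where unambiguous parsing gets tricky.
    """
    p_words, g_words = phrase.split(), gloss.split()
    if len(p_words) != len(g_words):
        return False
    return all(
        len(p.split("-")) == len(g.split("-"))
        for p, g in zip(p_words, g_words)
    )


check_alignment("insul-arum", "island-GEN.PL")   # aligned
check_alignment("insul-arum", "island-GEN-PL")   # morpheme count mismatch
```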
Ok, good examples. So there is good reason to add special support for bivariate functions - without going to multivariate > 2 right away, correct?
Then I'd say the most pragmatic encoding would be a CLDF component for bivariate parameters, with
- a required `BivariateValueTable`
- an optional `BivariateParameterTable`, adding metadata about the parameters.
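To make this concrete — none of this is specified anywhere yet, and the column names (`Language1_ID`, `Language2_ID`) and the `dc:conformsTo` URL are invented for illustration — the table description in the dataset's JSON metadata might look roughly like:

```json
{
  "url": "bivariate-values.csv",
  "dc:conformsTo": "http://cldf.clld.org/v1.0/terms.rdf#BivariateValueTable",
  "tableSchema": {
    "columns": [
      {"name": "ID", "propertyUrl": "http://cldf.clld.org/v1.0/terms.rdf#id"},
      {"name": "Language1_ID", "datatype": "string"},
      {"name": "Language2_ID", "datatype": "string"},
      {"name": "Parameter_ID", "propertyUrl": "http://cldf.clld.org/v1.0/terms.rdf#parameterReference"},
      {"name": "Value", "propertyUrl": "http://cldf.clld.org/v1.0/terms.rdf#value"}
    ]
  }
}
```

i.e. a regular ValueTable, except with two language references per row instead of one.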
@SimonGreenhill @LinguList do we have any concrete examples for this, i.e. datasets where such data is readily available - for testing and as examples?
Here's another set of distance matrices which would profit from CLDF being able to model these efficiently: https://doi.org/10.5281/zenodo.7973820 > Euclide, the crow, the wolf and the pedestrian: distance metrics for...
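Such a matrix maps onto the proposed component naturally: flatten it into long-format rows, one value per language pair. A minimal sketch (language IDs and column names are made up for illustration; for a symmetric metric only the upper triangle is needed):

```python
import itertools


def matrix_to_pairs(languages, matrix):
    """Flatten a symmetric square distance matrix into long-format
    rows -- the shape a bivariate value table would store, one value
    per unordered language pair."""
    return [
        {"Language1_ID": l1, "Language2_ID": l2, "Value": matrix[i][j]}
        for (i, l1), (j, l2) in itertools.combinations(enumerate(languages), 2)
    ]


langs = ["abcd1234", "efgh1234", "ijkl1234"]  # hypothetical glottocodes
dist = [
    [0.0, 0.4, 0.9],
    [0.4, 0.0, 0.7],
    [0.9, 0.7, 0.0],
]
rows = matrix_to_pairs(langs, dist)
# 3 languages -> 3 pairs
```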
I think cldf-datasets is fine.
From a database perspective, things with the same properties should go in the same table. So since your multiple language tables have a lot of columns in common, they should...
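A toy sketch of what that looks like in practice (data and the extra `Group` column are made up — the point is just one table plus a discriminator column, rather than parallel tables):

```python
# Two "language tables" sharing the same columns (ID, Name, Glottocode).
table_a = [{"ID": "l1", "Name": "Lang A", "Glottocode": "aaaa1234"}]
table_b = [{"ID": "l2", "Name": "Lang B", "Glottocode": "bbbb1234"}]

# Merge into a single table, recording the origin in a Group column
# instead of keeping separate tables with identical schemas.
merged = (
    [dict(row, Group="A") for row in table_a]
    + [dict(row, Group="B") for row in table_b]
)
```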
One possible way to deal with paradigms would be via Grammaticon - where paradigms could be curated as sets of grammatical concepts. Then, counterparts for paradigm cells could simply be...