Gil
In the past I have successfully managed to get a speed-up from processing PDB files in parallel using C++. The way to go, in my opinion, is as follows: 1....
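A minimal sketch of the pattern I mean, using `std::async` over a list of files; the file names and `process_pdb` here are placeholders, not actual Shogun code:

```cpp
#include <future>
#include <iostream>
#include <string>
#include <vector>

// Stand-in for the real per-file work (parsing, feature extraction, ...).
double process_pdb(const std::string& path)
{
    return static_cast<double>(path.size()); // placeholder computation
}

int main()
{
    std::vector<std::string> paths = {"1abc.pdb", "2xyz.pdb", "3foo.pdb"};

    // Launch one asynchronous task per file; the runtime spreads the tasks
    // across the available cores.
    std::vector<std::future<double>> results;
    for (const auto& path : paths)
        results.push_back(std::async(std::launch::async, process_pdb, path));

    // Collect the per-file results once each task has finished.
    double total = 0;
    for (auto& r : results)
        total += r.get();

    std::cout << "processed " << paths.size() << " files, total = " << total << "\n";
    return 0;
}
```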
Hmmm, this would be more complicated, because we call `get` with a visitor pattern, so we have an `Any` to visit on. What you need is a way to dispatch with...
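To illustrate the kind of dispatch I mean, here is a standalone sketch that uses `std::variant`/`std::visit` as a stand-in for our `Any` and visitor machinery, so it shows the pattern only, not the actual Shogun types:

```cpp
#include <iostream>
#include <variant>
#include <vector>

// Stand-in for shogun's Any: a value that can hold one of several types.
using AnyLike = std::variant<int, double, std::vector<float>>;

// A visitor with one overload per stored type; std::visit picks the right
// overload at runtime based on what the variant currently holds.
struct PrintVisitor
{
    void operator()(int v) const { std::cout << "int: " << v << "\n"; }
    void operator()(double v) const { std::cout << "double: " << v << "\n"; }
    void operator()(const std::vector<float>& v) const
    {
        std::cout << "vector of " << v.size() << " floats\n";
    }
};

int main()
{
    AnyLike a = 3.14;
    std::visit(PrintVisitor{}, a); // dispatches to the double overload

    a = std::vector<float>{1.f, 2.f};
    std::visit(PrintVisitor{}, a); // dispatches to the vector overload
    return 0;
}
```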
@karlnapf this is just for classifiers; there should be more coming up soon.
> I would like to work on this. Shall I start by moving add to CMath?

You shouldn't move it to `CMath` but to `linalg` instead. However, `linalg` already has `add`....
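For reference, a minimal sketch of what is already there; I am writing the `linalg::add` signature (the alpha/beta scaling factors) and the header path from memory, so double-check against `LinalgNamespace.h`:

```cpp
#include <shogun/base/init.h>
#include <shogun/lib/SGVector.h>
#include <shogun/mathematics/linalg/LinalgNamespace.h>

using namespace shogun;

int main()
{
    init_shogun_with_defaults();

    SGVector<float64_t> a(3), b(3);
    a.set_const(1.0);
    b.set_const(2.0);

    // Elementwise result = 2*a + 3*b
    auto result = linalg::add(a, b, 2.0, 3.0);
    result.display_vector("result");

    exit_shogun();
    return 0;
}
```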
hmm, I think you are confusing things. `operator+` in linalg is from Eigen, which uses SIMD instructions where possible. `result_eig = alpha * a_eig + beta * b_eig;` is written...
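To make this concrete, here is the same expression in plain Eigen, outside of the Shogun wrappers; the whole right-hand side becomes a single expression template, and Eigen vectorises the resulting loop with SIMD where the hardware allows:

```cpp
#include <Eigen/Dense>
#include <iostream>

int main()
{
    Eigen::VectorXd a_eig = Eigen::VectorXd::LinSpaced(8, 0.0, 7.0);
    Eigen::VectorXd b_eig = Eigen::VectorXd::Constant(8, 1.0);
    const double alpha = 2.0, beta = 0.5;

    // No temporaries for alpha*a_eig and beta*b_eig: the expression is
    // evaluated in a single pass, and that pass uses SIMD when possible.
    Eigen::VectorXd result_eig = alpha * a_eig + beta * b_eig;

    std::cout << result_eig.transpose() << std::endl;
    return 0;
}
```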
@karlnapf @vigsterkr not sure this is worth working on? I mean D seems to be a great language but it isn't used much?
@vigsterkr I copied a few things from deadbeef to add RapidJSON as a dependency.
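For context during review, this is the kind of use I have in mind for it, i.e. parsing the JSON responses we get back from the OpenML server (that use is my assumption here; the snippet is standalone RapidJSON with a made-up response fragment):

```cpp
#include <rapidjson/document.h>
#include <iostream>

int main()
{
    // Made-up response fragment, just to show the parsing API.
    const char* json = R"({"flow": {"name": "sklearn.pipeline.Pipeline", "id": 9602}})";

    rapidjson::Document doc;
    doc.Parse(json);
    if (doc.HasParseError())
        return 1;

    const auto& flow = doc["flow"];
    std::cout << flow["name"].GetString() << " (id "
              << flow["id"].GetInt() << ")\n";
    return 0;
}
```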
I have a local version mostly working (with binary classification tasks at least). There are some memory issues, but they will be fixed once we switch to smart pointers.
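By switching to smart pointers I mean dropping manual reference counting in favour of `std::shared_ptr` ownership; a rough standalone sketch of the effect, with placeholder classes rather than real Shogun ones:

```cpp
#include <iostream>
#include <memory>

// Placeholder classes standing in for Shogun objects, e.g. features and a
// machine that keeps a reference to them.
struct Features
{
    ~Features() { std::cout << "Features released\n"; }
};

struct Machine
{
    explicit Machine(std::shared_ptr<Features> f) : features(std::move(f)) {}
    std::shared_ptr<Features> features; // shared ownership, no manual ref counting
};

int main()
{
    auto features = std::make_shared<Features>();
    Machine machine(features); // both the caller and the machine own the features
    features.reset();          // the caller drops its reference...
    // ...but the features stay alive as long as the machine holds them, and
    // are released automatically when the machine goes out of scope.
    return 0;
}
```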
@karlnapf so this works now; a minimal example looks like this:

```cpp
// Headers needed by the calls below (the original header names were lost;
// the OpenML header path in particular is an assumption).
#include <shogun/base/init.h>
#include <shogun/io/SGIO.h>
#include <shogun/io/openml/OpenMLFlow.h>

using namespace shogun;

int main()
{
    init_shogun_with_defaults();
    sg_io->set_loglevel(MSG_GCDEBUG);

    auto flow = OpenMLFlow::download_flow("9602", "");
    // ...
```
> Ah yes, a meta example would be good for that indeed. Is it reasonable that the smart pointer PR will be merged soon?

Need to ask @vigsterkr, last week he...