ludwig
Low-code framework for building custom LLMs, neural networks, and other AI models
Here's the stack trace:

```
(_shuffle_map pid=954) Traceback (most recent call last):
(_shuffle_map pid=954)   File "python/ray/_raylet.pyx", line 726, in ray._raylet.execute_task
(_shuffle_map pid=954)   File "python/ray/_raylet.pyx", line 727, in ray._raylet.execute_task
(_shuffle_map pid=954)   File...
```
**Context** I'd seen a higher AUROC (> 0.8) on a dataset in the logs printed by one specific trial (screenshot 1), but in the overall summary for...
**Is your feature request related to a problem? Please describe.** Is it possible to do active learning based on the current master branch? Any pointers would be highly appreciated.
**Describe the bug**

> Hi fire! I worked on some of the torchscript stuff so can help out here a bit.
>
> Next step is probably checking that the...
Noticed a bug while transforming some schemas: since this is a plain dataclass, it has no automatic JSON representation at the moment. CC @connor-mccorm @justinxzhao
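For reference, a plain dataclass can be given a JSON representation with the stdlib `dataclasses.asdict` helper; a minimal sketch, where `SplitConfig` is a hypothetical stand-in for the actual schema dataclass, not Ludwig's real class:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class SplitConfig:
    # Hypothetical stand-in for a Ludwig schema dataclass; fields are illustrative.
    type: str = "random"
    probabilities: tuple = (0.7, 0.1, 0.2)

# asdict() recursively converts the dataclass (including nested
# containers) into plain dicts/lists/tuples, which json.dumps can serialize.
config = SplitConfig()
print(json.dumps(asdict(config)))
# → {"type": "random", "probabilities": [0.7, 0.1, 0.2]}
```

This is the stdlib route; a schema library can layer validation on top, but `asdict` alone is enough to get a JSON-serializable dict out of a pure dataclass.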
We should update the `__init__.py` in schema to validate the schema for the global defaults section of Ludwig. The updated config will look like this:

```yaml
preprocessing:
  split:
    type: random
    ...
```
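To illustrate the shape such validation could take, here is a minimal sketch; the `validate_defaults` helper and the set of allowed split types are illustrative assumptions, not Ludwig's actual API or value list:

```python
# Hypothetical validator for the defaults preprocessing section.
# VALID_SPLIT_TYPES is an illustrative set, not Ludwig's real list.
VALID_SPLIT_TYPES = {"random", "fixed", "stratify"}

def validate_defaults(config: dict) -> None:
    """Raise ValueError if the defaults section is malformed."""
    split = config.get("preprocessing", {}).get("split", {})
    split_type = split.get("type", "random")
    if split_type not in VALID_SPLIT_TYPES:
        raise ValueError(f"invalid split type: {split_type!r}")

# A config matching the yaml above passes silently.
validate_defaults({"preprocessing": {"split": {"type": "random"}}})
```

In practice this check would live alongside the marshmallow schemas so the defaults section fails fast with a clear error instead of propagating an invalid value into training.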
Currently, we default `gpu_resources_per_trial` to 0 if unspecified (see [here](https://github.com/ludwig-ai/ludwig/blob/master/ludwig/hyperopt/execution.py#L924)). However, this is counter-intuitive for users running on GPU clusters, who would most of the time prefer to have...
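A sketch of the kind of smarter default being suggested; `resolve_gpu_resources_per_trial` is a hypothetical helper, and in practice the GPU count would come from a real device query (e.g. `torch.cuda.device_count()`) rather than a parameter:

```python
def resolve_gpu_resources_per_trial(requested, available_gpus):
    """Illustrative default resolution: honor an explicit user value;
    otherwise default to 1 GPU per trial when GPUs are available, else 0."""
    if requested is not None:
        return requested
    return 1 if available_gpus > 0 else 0

print(resolve_gpu_resources_per_trial(None, available_gpus=4))  # → 1
print(resolve_gpu_resources_per_trial(None, available_gpus=0))  # → 0
```

This keeps the current behavior on CPU-only machines while giving GPU-cluster users the default they would expect.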
This issue tracks the conversion of core pieces of the Ludwig schema to using marshmallow schemas:

* [x] Combiners - #1347
* [x] Trainer - #1606
* [ ] Hyperopt...