Thierry Jean

Results 97 comments of Thierry Jean

Hi everyone! We just merged #1104 which introduces caching as a core Hamilton feature. We invite you to try it via [Google Colab](https://colab.research.google.com/github/DAGWorks-Inc/hamilton/blob/main/examples/caching/tutorial.ipynb) and review the [docs](https://hamilton.dagworks.io/en/latest/concepts/caching/)!

Also, we should be able to clean up config files such as `/.flake8` now that we're using Ruff and linting is configured in `pyproject.toml`

What's the status on this PR? The obfuscation feature could be valuable when passing the dlt config to LLMs via the MCP

To add: [torchmetrics.functional](https://github.com/Lightning-AI/torchmetrics/tree/master/src/torchmetrics/functional) has a lot of metrics implemented for PyTorch tensors in functional form. The typical input is `metric(predicted, true, **kwargs)`, which maps to table columns.

> A challenge...
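To illustrate the mapping from functional metrics to table columns, here is a minimal sketch. The `mean_squared_error` below is a hand-rolled stand-in for `torchmetrics.functional.mean_squared_error` (which operates on tensors); the "table" is just a dict of columns, so the example stays dependency-free.

```python
# Sketch: a functional metric with signature metric(predicted, true) -> scalar.
# Hand-rolled stand-in for torchmetrics.functional.mean_squared_error.
def mean_squared_error(predicted: list[float], true: list[float]) -> float:
    return sum((p - t) ** 2 for p, t in zip(predicted, true)) / len(true)

# A "table" as a dict of columns; each metric argument maps to one column.
table = {
    "y_pred": [2.5, 0.0, 2.0, 8.0],
    "y_true": [3.0, -0.5, 2.0, 7.0],
}

mse = mean_squared_error(table["y_pred"], table["y_true"])
print(mse)  # 0.375
```

The point is that each positional argument of the functional metric lines up one-to-one with a column, which is what makes this family of functions easy to wire into a dataframe-centric DAG.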

@skrawcz Changes to `Parallelizable/Collect` could break caching, so it's worth adding a few tests to `tests/caching/test_integration.py`. If you run into issues, the work by @cswartzvi on `TaskExecutionHook` #1269 would help make caching...

One straightforward solution could be to use IPython. Via `from IPython.display import HTML` you can wrap an HTML string with `HTML(...)`, and IPython's display machinery will render as HTML any object that defines a `_repr_html_()` method. We could use that...
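As a quick sketch of the hook IPython relies on: any object exposing `_repr_html_()` gets rendered as HTML in a notebook. The `NodeSummary` class below is hypothetical, purely to show the protocol.

```python
# Sketch of IPython's rich-display protocol: objects with a `_repr_html_()`
# method are rendered as HTML by IPython's display machinery.
# `NodeSummary` is a hypothetical example class, not part of any library.
class NodeSummary:
    def __init__(self, name: str, status: str):
        self.name = name
        self.status = status

    def _repr_html_(self) -> str:
        return f"<b>{self.name}</b>: <code>{self.status}</code>"

summary = NodeSummary("load_data", "cached")
print(summary._repr_html_())  # <b>load_data</b>: <code>cached</code>
```

In a notebook, simply evaluating `summary` in a cell would display the rendered HTML; outside IPython you can still call `_repr_html_()` directly, as above.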

This could leverage the `Store` objects introduced with caching in #1104
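For illustration, a minimal key-value interface in the spirit of those `Store` objects might look like the following. The names and methods here are illustrative only, not Hamilton's actual API.

```python
# Hypothetical sketch of a key-value store, in the spirit of the `Store`
# objects from #1104. Illustrative names only, not Hamilton's actual API.
from typing import Any


class InMemoryStore:
    """Dict-backed store: set/get/exists by string key."""

    def __init__(self) -> None:
        self._data: dict[str, Any] = {}

    def set(self, key: str, value: Any) -> None:
        self._data[key] = value

    def get(self, key: str) -> Any:
        return self._data[key]

    def exists(self, key: str) -> bool:
        return key in self._data


store = InMemoryStore()
store.set("node_result", {"rows": 42})
print(store.exists("node_result"), store.get("node_result"))
```

The appeal of this shape is that the same interface can sit in front of memory, disk, or a remote backend, which is what makes it reusable beyond caching.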

@cswartzvi Great job on the PR! It makes me very happy to see docs pages included :smile: Regarding tuple length, I see the merit of your approach and implementation. For...

I agree with the motivation of the cited issue! But to add more context:

- This discussion around chunking focuses on unstructured text where you don't have message ids, sections,...
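When there are no structural markers to split on, a common baseline is fixed-size chunking with overlap, so that context isn't cut mid-thought at every boundary. A minimal sketch (the size and overlap values are illustrative):

```python
# Baseline chunking for unstructured text with no message ids or sections:
# fixed-size character windows with overlap between consecutive chunks.
def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    if overlap >= size:
        raise ValueError("overlap must be smaller than size")
    step = size - overlap
    return [text[i : i + size] for i in range(0, max(len(text) - overlap, 1), step)]


chunks = chunk_text("abcdefghij", size=4, overlap=1)
print(chunks)  # ['abcd', 'defg', 'ghij']
```

Structured inputs (chat logs with message ids, documents with sections) allow much better boundaries than this, which is exactly why the unstructured case deserves its own discussion.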