Comments of Lukasz Stafiniak (114 results)

To clarify the wording: the _tensors_ are temporary, but the _arrays_ persist across the optimizer steps. This makes sense in OCANNL -- in PyTorch and similar frameworks the arrays...
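
A minimal sketch of that distinction, with hypothetical names (this is not OCANNL's actual API): the tensor wrapper carries the temporary computation structure, while the backing array lives in a node record that persists from one optimizer step to the next.

```ocaml
(* Illustrative only: the names [node], [tensor], [forward] are made up
   for this sketch, not OCANNL's real types. *)
type node = {
  id : int;
  mutable value :
    (float, Bigarray.float32_elt, Bigarray.c_layout) Bigarray.Genarray.t;
    (* persistent storage, reused across optimizer steps *)
}

type tensor = {
  node : node;            (* points at the long-lived array *)
  forward : unit -> unit; (* temporary computation, rebuilt per graph *)
}

(* An optimizer step mutates [node.value] in place; the [tensor]
   wrapper can be discarded and rebuilt without losing the data. *)
```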

I no longer remember what the issue is about... Maybe memory modes make the situation clearer.

There's a related issue that I will fix tomorrow: at link time, require that tensor nodes in the graph of a tensor are either embedded, or already part of the...

This will be mostly fixed soon, except for the case where multiple virtual tensors share the same for loop. It's because I'm introducing filtering of virtual / non-virtual getters &...
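
A rough illustration of such filtering, under assumed names (the `access` type and the `is_virtual` predicate are invented for the sketch, not the actual code):

```ocaml
(* Partition the accesses in a loop body into ones touching virtual
   nodes (candidates for inlining away) and ones touching materialized
   nodes (kept as real array reads and writes). *)
type access = Get of int | Set of int (* node id read / written *)

let filter_accesses ~is_virtual body =
  List.partition (function Get id | Set id -> is_virtual id) body

(* Example: nodes with id < 10 treated as virtual. *)
let _virtual_accs, _real_accs =
  filter_accesses ~is_virtual:(fun id -> id < 10) [ Get 1; Set 42 ]
```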

There's a loosely related issue that I will fix tomorrow: at link time, require that tensor nodes in the graph of a tensor are either embedded, or already part of...
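
For illustration only, a sketch of what such a link-time check could look like; the node representation and the error message are assumptions, not the actual implementation:

```ocaml
(* Every tensor node reachable from the tensor being linked must either
   be embedded in its graph or already present in the context being
   linked against. *)
module Ids = Set.Make (Int)

type node = { id : int; embedded : bool }

let check_link ~context_nodes (graph_nodes : node list) =
  List.iter
    (fun n ->
      if not (n.embedded || Ids.mem n.id context_nodes) then
        invalid_arg
          (Printf.sprintf
             "node %d is neither embedded nor in the linked context" n.id))
    graph_nodes
```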

The user knows what the user is doing? Maybe this will resurface once there's evidence it's helpful.

The debug logs file is already much shorter now that only the single-propagation environment is passed around, which was independently required so that projections are not polluted across propagation steps. So maybe no work...
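
As a hedged sketch of that design choice (names invented for the example): allocating a fresh environment per propagation step guarantees that projections recorded in one step cannot leak into the next.

```ocaml
(* A fresh environment per propagation step: whatever projections the
   step records in [env] are discarded with it afterwards. *)
type env = (string, string) Hashtbl.t

let with_fresh_env (propagate : env -> 'a) : 'a =
  let env : env = Hashtbl.create 16 in
  propagate env (* [env] never escapes this single propagation step *)
```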

With the new nomenclature: `reference_lower`, `cpu_friendly_lower`, `cuda_friendly_lower`.
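
Purely illustrative, assuming the three names dispatch on a lowering target (the `ir` placeholder and the function bodies are stand-ins, not real code):

```ocaml
type ir = unit (* placeholder for the lowered representation *)

let reference_lower (_expr : string) : ir = ()
let cpu_friendly_lower (_expr : string) : ir = ()
let cuda_friendly_lower (_expr : string) : ir = ()

type target = Reference | Cpu | Cuda

let lower ~target expr =
  match target with
  | Reference -> reference_lower expr
  | Cpu -> cpu_friendly_lower expr
  | Cuda -> cuda_friendly_lower expr
```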

I don't think this is the right approach right now. There will be generic optimizations: reordering loop nests for data locality, and tiling. These can start from an already lowered representation....
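
A toy sketch of why a lowered representation is a good starting point: on a small loop IR (invented here, not OCANNL's), tiling is a mechanical rewrite of a `For` node.

```ocaml
type stmt =
  | For of string * int * stmt (* index name, extent, body *)
  | Body of string             (* opaque statement *)

(* Split the loop over [index] into an outer tile loop and an inner
   intra-tile loop: i -> i_outer * size + i_inner. Only handles the
   evenly divisible case, for simplicity. *)
let rec tile ~index ~size = function
  | For (i, extent, body) when i = index && extent mod size = 0 ->
      For (i ^ "_outer", extent / size, For (i ^ "_inner", size, body))
  | For (i, extent, body) -> For (i, extent, tile ~index ~size body)
  | Body _ as s -> s
```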

It's probably more interesting than it is practical...