Joel
Great work Ravi - to articulate my point a bit better:
- Spark is a massive framework designed for big data, so it either lives on a big pre-configured cluster...
I think this is the right approach - I know @imdoroshenko has had success with the [libcst](https://github.com/Instagram/LibCST) library too
One further point - I think this sessionless pipeline construction should live in kedro core longer term rather than just in Viz; it has plenty of uses for other purposes.
Yeah, perhaps AST parsing isn't needed - the actual `pipeline` objects are valid Python even before the context, catalog etc. are initialized. So yes, all you need is the result of `find_pipelines()`...
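To illustrate the idea (a minimal sketch, not Kedro's actual implementation - `demo_proj` and the helper name are made up): importing the project's pipeline registry module directly gives you the pipeline objects without ever creating a session, context, or catalog.

```python
import importlib


def load_pipelines_without_session(package_name: str) -> dict:
    """Hypothetical sketch: import a project's ``pipeline_registry``
    module and call ``register_pipelines()`` directly, skipping
    KedroSession/context/catalog initialisation entirely."""
    registry = importlib.import_module(f"{package_name}.pipeline_registry")
    return registry.register_pipelines()
```

The returned mapping of pipeline name to `Pipeline` object is all Viz needs for structural analysis.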
Some thoughts:
- `_find_kedro_project` could have a better name, maybe `find_kedro_project_path`?
- Can we do this analysis for `kedro-boot`? IIRC there were lots of hacks needed to make the...
I'd love to have a broader conversation about dict unpacking:
> Actually it is because it messes up kedro-mlflow automatic parameter tracking with parameters unrelated to the pipeline, but I...
Personally I've always found the decision to restrict `oc.env` arbitrary - it makes the user add one line to their `settings.py` for no reason.
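For reference, that one line is roughly this (if I remember the `OmegaConfigLoader` `custom_resolvers` hook correctly - treat this as a sketch, not gospel):

```python
# settings.py - re-enable the ``oc.env`` resolver outside credentials
# by registering it via OmegaConfigLoader's custom_resolvers hook
from omegaconf.resolvers import oc

CONFIG_LOADER_ARGS = {"custom_resolvers": {"oc.env": oc.env}}
```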
A user just reported that `kedro run --conf-source` and `CONF_SOURCE` in `settings.py` do different things: https://kedro-org.slack.com/archives/C03RKP2LW64/p1741624675107849
So this is a great first start - the credentials resolution looks complicated, but also well thought out. I think we'd need to see some tests for this to go...
Perhaps the right way is to contextualise this around a "Run experiments" workflow and make the tracked parameters the pieces that one can change?