David L. Qiu
@krassowski Thanks for the suggestions! Yes, I agree: loading entry points asynchronously would definitely bring performance benefits. Right now, I think we're loading all entry points in giant code blocks...
@krassowski The example on the left is much faster because it's not doing anything besides creating a [coroutine object](https://docs.python.org/3/library/asyncio-task.html#id2). The body of `import_later()` isn't run until `await coroutine` or `loop.create_task(coroutine)`...
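A minimal sketch of the point above (the function name `import_later` is taken from the comment; the body is illustrative): calling an async function merely creates a coroutine object, and the body does not execute until the coroutine is awaited or scheduled on an event loop.

```python
import asyncio

ran = False

async def import_later():
    """Stand-in for a deferred, expensive import."""
    global ran
    ran = True

# Calling the async function only builds a coroutine object;
# the body has NOT run at this point.
coro = import_later()
assert ran is False

# The body runs only once the coroutine is awaited
# (here via asyncio.run, which drives it on a fresh event loop).
asyncio.run(coro)
assert ran is True
```

The same applies to `loop.create_task(coro)`: the task is scheduled, but the body still runs only once the event loop gets a chance to drive it.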
Actually I just realized that https://github.com/jupyter-server/jupyter_server/pull/1417 needs to be released first, before I can provide an example demonstrating my point.
We should be mindful of the difference between concurrency (doing multiple _interrupted_ tasks in the same thread of execution) and parallelism (doing multiple un-interrupted tasks in separate threads of execution)....
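The distinction above can be illustrated with a small sketch (names are illustrative, not from the thread): two asyncio tasks interleave on a single thread, each yielding control at an `await` point, whereas parallelism would require separate threads or processes.

```python
import asyncio
import threading

order = []

async def task(name):
    # Concurrency: each iteration appends, then yields control back
    # to the event loop, letting the other task be "interrupted in".
    for _ in range(2):
        order.append((name, threading.get_ident()))
        await asyncio.sleep(0)  # cooperative yield point

async def main():
    await asyncio.gather(task("a"), task("b"))

asyncio.run(main())

# All four steps ran on the SAME thread, interleaved between the
# two tasks; true parallelism would show multiple thread idents.
assert len({ident for _, ident in order}) == 1
print([name for name, _ in order])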
I've renamed the issue to focus on the speed of importing the `jupyter_ai` package. This isn't the only performance concern, but others can be tracked in separate issues. I've added...
Hmm, it seems like this is still an issue after the migration from LangChain to LiteLLM. Here is the output of that command on the `litellm` branch:

```
% jupyter...
```
Thank you for documenting this feature request! This is an interesting concept; I'll add this to the v3 milestone so our team can explore this.
Closing this issue, as the author's problem was resolved. We've also improved our documentation regarding provider dependencies and how to make models available in the Jupyter AI settings. Feel free...
Just got pinged about this internally since our services run CVE scanners to find vulnerabilities in our dependencies. I agree that there's no security issue with this, but it would...
I'm comfortable with that too. I personally don't see a problem with vendoring an EOL dependency just to use the CSS files that we're interested in. We do vendor `yarn`...