Results: 76 comments of Dean Kayton

This could also be relevant. At the moment it might not be feasible cost-wise to train such a model, and it is not expected to compete with the performance...

There is also Alpaca from Stanford. It might need to be trained in the cloud, with the model then distributed to GPU-less, nimble devices: https://github.com/tatsu-lab/stanford_alpaca

Here are some insights into why you might want to fine-tune, along with some alternatives to fine-tuning that are less resource-intensive and more general-purpose (using embeddings): https://bdtechtalks.com/2023/05/01/customize-chatgpt-llm-embeddings/...
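To make the embeddings route concrete, here is a minimal retrieval sketch (the model choice and corpus are illustrative, not from the article): embed the documents once, then pull the closest matches into the prompt instead of retraining the model.

```python
# Minimal sketch of embeddings-based retrieval as an alternative to
# fine-tuning; model and documents are illustrative.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "Our refund window is 30 days from purchase.",
    "Support is available Monday to Friday, 9am-5pm.",
    "Enterprise plans include a dedicated account manager.",
]
doc_vecs = model.encode(docs, normalize_embeddings=True)

def top_context(question: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the question."""
    q_vec = model.encode([question], normalize_embeddings=True)[0]
    scores = doc_vecs @ q_vec  # cosine similarity (vectors are normalized)
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

# These snippets get prepended to the LLM prompt instead of retraining it.
print(top_context("How long do I have to return a product?"))
```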

Interesting, and on topic here - https://the-decoder.com/guanaco-is-a-chatgpt-competitor-trained-on-a-single-gpu-in-one-day/
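For context, the single-GPU angle there comes down to 4-bit quantization plus LoRA adapters (the QLoRA recipe behind Guanaco). A rough setup sketch, with an illustrative base model and hyperparameters:

```python
# Sketch of a 4-bit quantized LoRA fine-tuning setup; the base model name
# and LoRA hyperparameters are illustrative, not Guanaco's exact config.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",  # assumption: any LLaMA-class base model
    quantization_config=bnb_config,
    device_map="auto",
)
lora = LoraConfig(
    r=64, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the small adapter weights train
```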

Here is some more relevant information... [GPT4All: An ecosystem of open-source on-edge large language models.](https://github.com/nomic-ai/gpt4all) GPT4All is an ecosystem to train and deploy powerful and customized large language models that...
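For a feel of the API, a minimal sketch using the `gpt4all` Python bindings (the model filename is an assumption — any model from their catalog works, downloaded on first use):

```python
# Run a local, CPU-friendly model via the gpt4all Python bindings.
from gpt4all import GPT4All

model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")  # assumption: catalog model name
with model.chat_session():
    print(model.generate("Summarize why on-edge LLMs are useful.", max_tokens=128))
```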

Asking questions across multiple dataframes would be very interesting. I was hoping to do something like this:

```python
import pandas as pd
import sketch
from sqlalchemy import create_engine

# Define...
```
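In the meantime, one workaround is to join the frames first and ask `sketch` about the combined result. A minimal sketch of that idea, with illustrative data standing in for the SQLAlchemy tables above (merge-then-ask is a workaround, not a built-in multi-dataframe feature):

```python
import pandas as pd
import sketch  # registers the .sketch accessor on DataFrames

# Illustrative data; real use would pull these tables via SQLAlchemy
orders = pd.DataFrame({"customer_id": [1, 2, 1], "total": [10.0, 25.0, 7.5]})
customers = pd.DataFrame({"customer_id": [1, 2], "name": ["Ada", "Grace"]})

# Join first, then ask sketch about the combined frame
combined = orders.merge(customers, on="customer_id")
combined.sketch.ask("Which customer has the highest total spend?")  # renders inline in a notebook
```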

Maybe it's better to import the automatically generated Supabase API into the FastAPI OpenAPI spec to unify the two APIs, so that they work side by side rather than one over...
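A rough sketch of what that could look like (the project URL and key are placeholders; note that Supabase's PostgREST layer serves a Swagger 2.0 spec, so a real merge would convert it to OpenAPI 3 first):

```python
# Surface Supabase's auto-generated REST spec alongside FastAPI's own
# schema by merging paths into the generated OpenAPI document.
import httpx
from fastapi import FastAPI
from fastapi.openapi.utils import get_openapi

app = FastAPI()

SUPABASE_REST_URL = "https://YOUR-PROJECT.supabase.co/rest/v1/"  # placeholder
SUPABASE_ANON_KEY = "YOUR-ANON-KEY"  # placeholder

def custom_openapi():
    if app.openapi_schema:
        return app.openapi_schema
    schema = get_openapi(title="Unified API", version="0.1.0", routes=app.routes)
    resp = httpx.get(SUPABASE_REST_URL, headers={"apikey": SUPABASE_ANON_KEY})
    supabase_spec = resp.json()
    # Naive path merge: FastAPI's own routes win on collision.
    for path, item in supabase_spec.get("paths", {}).items():
        schema["paths"].setdefault(path, item)
    app.openapi_schema = schema
    return schema

app.openapi = custom_openapi
```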

I would be interested in reviving this PR. Do you think it needs to be specific to the Prime Go? Or could the mapping apply to:

- Prime 2/4/4+
- SC...

Yeah, definitely. Develop them separately, and write common functions that each type can call with different parameters. That way, parameter changes and device-specific function changes don't require re-verifying the other devices. Only...
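Something like this pattern, sketched abstractly in Python (the names are hypothetical, not the actual Mixxx mapping API):

```python
# Per-device parameter tables feed shared handler functions, so changing
# one device's parameters never touches the others.
PRIME_GO = {"decks": 2, "pads_per_deck": 4}
PRIME_4 = {"decks": 4, "pads_per_deck": 8}

def init_pads(device: dict) -> list[str]:
    """Common function each device type calls with its own parameters."""
    return [f"deck{d}-pad{p}"
            for d in range(device["decks"])
            for p in range(device["pads_per_deck"])]

print(init_pads(PRIME_GO))  # 8 pad ids
print(init_pads(PRIME_4))   # 32 pad ids
```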

Same here, with similar error messages on v14 and v15. The Docker setup is broken out of the box; I had to create a `.env` file with `ERPNEXT_VERSION=v14`, for example.
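For anyone else hitting this, a minimal `.env` next to the compose file along these lines was enough in my case:

```
# .env — pin the version the compose file expects
ERPNEXT_VERSION=v14
```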