Hanusz Leszek

@gojosh which font did you use?
Using your font in a banner with my icon: 
There is the [lxe/simple-llama-finetuner](https://github.com/lxe/simple-llama-finetuner) repo available for finetuning, but you need a GPU with at least 16 GB of VRAM to finetune the 7B model.
Hey, thanks! I notice, though, that it works now on the endpoint `https://countries.trevorblades.workers.dev/graphql` but not on `https://countries.trevorblades.com/graphql`. Is that normal?
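For reference, a minimal sketch of how one could probe both endpoints with the gql client; it assumes the public countries schema exposes a `continents { code name }` field, and the synchronous requests transport is just one option:

```python
from gql import Client, gql
from gql.transport.requests import RequestsHTTPTransport

# The same query is sent to both endpoints to compare their behaviour.
query = gql("{ continents { code name } }")

for url in (
    "https://countries.trevorblades.workers.dev/graphql",
    "https://countries.trevorblades.com/graphql",
):
    transport = RequestsHTTPTransport(url=url)
    client = Client(transport=transport, fetch_schema_from_transport=False)
    try:
        result = client.execute(query)
        print(f"{url}: OK, {len(result['continents'])} continents")
    except Exception as exc:
        print(f"{url}: failed ({exc})")
```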
I understand. Sorry, I don't have any recommendations.
I'm on Ubuntu 23.04. I tried to install the Python package in conda envs with Python versions 3.9, 3.10, and 3.11, and it fails every time. For Python 3.9 and...
Note that we would also need to add [aiofiles](https://pypi.org/project/aiofiles/) as a dependency.
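aiofiles provides non-blocking file I/O; as a rough sketch of what the dependency would enable (the helper name, file path, and chunk size here are only illustrative, not part of the existing codebase):

```python
import asyncio

import aiofiles


async def stream_file(path: str, chunk_size: int = 64 * 1024):
    """Yield a file chunk by chunk without blocking the event loop."""
    async with aiofiles.open(path, "rb") as f:
        while True:
            chunk = await f.read(chunk_size)
            if not chunk:
                break
            yield chunk


async def main():
    total = 0
    # "upload.bin" is just a placeholder file for the example.
    async for chunk in stream_file("upload.bin"):
        total += len(chunk)
    print(f"read {total} bytes asynchronously")


asyncio.run(main())
```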
Something like [transport-level batching](https://www.apollographql.com/blog/apollo-client/performance/query-batching/#transport-level-batching)? Yes, that could be useful (a rough sketch of the wire format follows below).
- Do you know a public GraphQL server configured for this that we could use for tests?
- On which transport...
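For context, transport-level batching means several operations share a single HTTP request and the server replies with an array of results. A rough sketch of that wire format, using a placeholder endpoint (the server must be explicitly configured to accept batched payloads):

```python
import requests

# Transport-level batching: multiple operations are sent as a JSON array
# in a single HTTP POST, and the server answers with an array of results.
batched_payload = [
    {"query": "{ continents { code } }"},
    {"query": "{ countries { code name } }"},
]

response = requests.post(
    "https://example.com/graphql",  # placeholder batching-enabled endpoint
    json=batched_payload,
)
response.raise_for_status()

# One network round trip, one result object per operation.
for result in response.json():
    print(result["data"])
```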
I see. So you want to mix two different things (a sketch combining them follows below):
- batching (to merge multiple queries in a single one)
- lazy loading (to avoid needing to send a query...
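To make the distinction concrete, here is a toy sketch (all names made up, not the gql API) of mixing the two ideas: queries are recorded lazily and only sent, as one batch, when the caller flushes:

```python
import asyncio


class LazyBatcher:
    """Toy sketch: queries are recorded lazily and only sent,
    as a single batch, when flush() is awaited."""

    def __init__(self):
        self._pending: list[tuple[str, asyncio.Future]] = []

    def load(self, query: str) -> asyncio.Future:
        # Lazy part: nothing is sent yet, we only record the intent.
        future = asyncio.get_running_loop().create_future()
        self._pending.append((query, future))
        return future

    async def flush(self) -> None:
        # Batching part: everything pending goes out in one request.
        queries = [q for q, _ in self._pending]
        results = await self._send_batch(queries)
        for (_, future), result in zip(self._pending, results):
            future.set_result(result)
        self._pending.clear()

    async def _send_batch(self, queries):
        # Placeholder transport: a real implementation would POST the
        # whole list to a batching-enabled GraphQL endpoint.
        return [{"echo": q} for q in queries]


async def main():
    batcher = LazyBatcher()
    a = batcher.load("{ continents { code } }")
    b = batcher.load("{ countries { code } }")
    await batcher.flush()  # one round trip for both queries
    print(await a, await b)


asyncio.run(main())
```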