
🦖 Learn about LLMs, LLMOps, and vector DBs for free by designing, training, and deploying a real-time financial advisor LLM system ~ source code + video & reading materials

Results: 9 hands-on-llms issues

Ran into an `Unable to find installation candidates for torch (2.1.2+cpu)` error on a MacBook (Intel) when not specifying the `platform` key (i.e. `torch = { platform = "linux", version = "2.0.1+cpu",...
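For anyone hitting the same error: a minimal sketch of what a platform-aware spec might look like in `pyproject.toml`, using Poetry's multiple-constraints syntax. This is illustrative, not the repo's actual config — the versions, the `torch-cpu` source name, and the `priority` key (Poetry 1.5+) are assumptions:

```toml
[tool.poetry.dependencies]
# Pull the CPU-only wheel on Linux; `+cpu` builds are not published for
# macOS, so fall back to the regular PyPI wheel there.
torch = [
    { version = "2.0.1+cpu", source = "torch-cpu", platform = "linux" },
    { version = "2.0.1", platform = "darwin" },
]

[[tool.poetry.source]]
name = "torch-cpu"
url = "https://download.pytorch.org/whl/cpu"
priority = "explicit"
```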

I'm getting the error below while running `make dev_train_beam`. I couldn't figure it out. I'm working on macOS. Your help is much appreciated.

Great course so far! I'm having some trouble running `make install` in the `streaming_pipeline` folder. Any suggestions? The main dependencies are all installed.

```
Traceback (most recent call last):
  File "/mnt/cephfs/home/shixun2024/miniconda3/envs/GengN/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/mnt/cephfs/home/shixun2024/miniconda3/envs/GengN/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/mnt/cephfs/home/shixun2024/users/GengNan/hands-on-llms/modules/training_pipeline/tools/train_run.py", line 83, in <module>
    fire.Fire(train)
...
```

Not sure why initializing the Qdrant collection doesn't pick up the default value of 1 for `max_optimization_threads`, but making this change fixed issue #72 for me.
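For context, a minimal sketch of the workaround with `qdrant-client` — passing the value explicitly via `optimizers_config` instead of relying on the server default. The collection name, embedding size, and local URL are assumptions, not the repo's actual values:

```python
from qdrant_client import QdrantClient
from qdrant_client.http import models

client = QdrantClient(url="http://localhost:6333")  # assumed local Qdrant instance

client.create_collection(
    collection_name="alpaca_financial_news",  # hypothetical collection name
    vectors_config=models.VectorParams(
        size=384,  # assumed embedding dimension
        distance=models.Distance.COSINE,
    ),
    # Set max_optimization_threads explicitly, per the workaround for issue #72.
    optimizers_config=models.OptimizersConfigDiff(max_optimization_threads=1),
)
```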

```
RUST_BACKTRACE=full poetry run python -m bytewax.run tools.run_real_time:build_flow
2024-03-16 01:16:27,321 - INFO - Initializing env vars...
2024-03-16 01:16:27,322 - INFO - Loading environment variables from: .env
2024-03-16 01:16:30,824 - INFO -...
```

When I executed `make run_real_time` in the `hands-on-llms/modules/streaming_pipeline` folder, following the instructions, an exception came up.
```
RUST_BACKTRACE=full poetry run python -m bytewax.run tools.run_real_time:build_flow
2024-03-12 10:14:28,492 - INFO - Initializing env vars...
...
```

In your pipeline design, how do you implement Continuous Monitoring and Continuous Training to account for language model drift over time?
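Not an answer from the authors, but a minimal sketch of one common approach to the monitoring half: compare the embedding distribution of a reference window (e.g. the data the model was last fine-tuned on) against a recent window, and flag retraining when the drift score crosses a threshold. The threshold, dimensions, and synthetic data below are hypothetical illustrations, not the repo's actual monitoring code:

```python
import logging

import numpy as np

logger = logging.getLogger(__name__)

DRIFT_THRESHOLD = 0.15  # hypothetical cut-off; tune against a validation window


def embedding_drift(reference: np.ndarray, current: np.ndarray) -> float:
    """Cosine distance between the mean embeddings of two document windows."""
    ref_mean = reference.mean(axis=0)
    cur_mean = current.mean(axis=0)
    cos_sim = float(
        np.dot(ref_mean, cur_mean)
        / (np.linalg.norm(ref_mean) * np.linalg.norm(cur_mean))
    )
    return 1.0 - cos_sim


def drift_detected(reference: np.ndarray, current: np.ndarray) -> bool:
    """Return True when drift exceeds the threshold, i.e. retraining is due."""
    score = embedding_drift(reference, current)
    logger.info("Embedding drift score: %.4f", score)
    return score > DRIFT_THRESHOLD


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.normal(0.0, 1.0, size=(1_000, 384))  # assumed 384-dim embeddings
    current = rng.normal(0.3, 1.0, size=(1_000, 384))    # shifted distribution
    print("Retrain?", drift_detected(reference, current))
```

In this repo's terms, a positive check could close the loop by re-running the training pipeline on fresh data (e.g. kicking off `make dev_train_beam` again).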