Results: 187 comments by Tom Dörr

`Suggest` takes a `target_module` argument:

```
dspy.Suggest(
    is_assessment_yes(contact_person_assessment.assessment_answer),
    "The text contains names and titles not in the context. Please revise for accuracy.",
    target_module=GenerateMail,
)
```

I believe I renamed `GenerateMail` to...
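
For context, a minimal sketch of how such a suggestion might sit inside a module; the signature, the `is_assessment_yes` helper, and the `MailWriter` wrapper below are hypothetical stand-ins, not the code from the original issue:

```python
import dspy

class GenerateMail(dspy.Signature):
    """Write a short mail grounded strictly in the given context."""
    context = dspy.InputField()
    request = dspy.InputField()
    mail = dspy.OutputField()

def is_assessment_yes(answer: str) -> bool:
    # Hypothetical check: treat any answer starting with "yes" as a pass.
    return answer.strip().lower().startswith("yes")

class MailWriter(dspy.Module):
    def __init__(self):
        super().__init__()
        self.draft = dspy.ChainOfThought(GenerateMail)
        self.assess = dspy.Predict("context, mail -> assessment_answer")

    def forward(self, context, request):
        mail = self.draft(context=context, request=request).mail
        assessment = self.assess(context=context, mail=mail)
        # target_module tells the backtracking handler which predictor to retry
        # when the suggestion fails.
        dspy.Suggest(
            is_assessment_yes(assessment.assessment_answer),
            "The text contains names and titles not in the context. Please revise for accuracy.",
            target_module=GenerateMail,
        )
        return dspy.Prediction(mail=mail)

# Suggestions only take effect once assertions are activated, e.g.:
# writer = MailWriter().activate_assertions()
```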

It would be great if this were handled better; I just spent some time trying to debug this error. As I understand it, this is also caused by MIPRO not supporting assertions....

This might not work since evaluation and some optimizers use threading themselves. Are you trying to speed up optimization or inference?
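
To illustrate where that threading already happens, here is a sketch of the usual evaluation setup; the metric, devset, and program are placeholders:

```python
import dspy
from dspy.evaluate import Evaluate

# Hypothetical metric and tiny devset, just to show where the threading lives.
def exact_match(example, pred, trace=None):
    return example.answer == pred.answer

devset = [dspy.Example(question="2+2?", answer="4").with_inputs("question")]
program = dspy.Predict("question -> answer")

# Evaluate already fans out over its own thread pool (num_threads),
# so wrapping it in extra threads rarely helps and can conflict.
evaluator = Evaluate(devset=devset, metric=exact_match, num_threads=8, display_progress=True)
score = evaluator(program)
```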

Yes I do: ![GPakSEEacAABQvc](https://github.com/stanfordnlp/dspy/assets/23431444/4b19273d-0f50-4a78-af62-3acc524c80b4) https://x.com/tom_doerr/status/1798806436476334123

This is some separate code I use somewhere for evaluation:

```
fewshot_optimizer = BootstrapFewShot(
    metric=great_tweet_metric,
    max_bootstrapped_demos=4,
    metric_threshold=metric_threshold,
)
compile_start = time.time()
threads = []
for dataset_idx in...
```
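
A rough sketch of the rest of that pattern, with placeholder data; this is not the exact code from the screenshot, and the datasets, metric, and student program below are made up:

```python
import threading
import time
import dspy
from dspy.teleprompt import BootstrapFewShot

def great_tweet_metric(example, pred, trace=None):
    # Placeholder metric.
    return len(pred.answer) > 0

datasets = [
    [dspy.Example(question="q1", answer="a1").with_inputs("question")],
    [dspy.Example(question="q2", answer="a2").with_inputs("question")],
]
metric_threshold = 1.0
compiled = [None] * len(datasets)

def compile_one(idx):
    # One optimizer per dataset split, compiled in its own thread.
    optimizer = BootstrapFewShot(
        metric=great_tweet_metric,
        max_bootstrapped_demos=4,
        metric_threshold=metric_threshold,
    )
    compiled[idx] = optimizer.compile(dspy.Predict("question -> answer"), trainset=datasets[idx])

compile_start = time.time()
threads = [threading.Thread(target=compile_one, args=(i,)) for i in range(len(datasets))]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"compiled in {time.time() - compile_start:.1f}s")
```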

Is this related to parallelism? I can't see any code related to that.

Could you just switch to a process-based worker model? That should still give you parallelism without needing to serialize the GPT3 instance.
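
A minimal sketch of what that could look like with `concurrent.futures`, where each worker process builds its own LM rather than receiving a pickled one; the model name and the `answer` task function are placeholders:

```python
from concurrent.futures import ProcessPoolExecutor
import dspy

def _init_worker():
    # Each process configures its own LM, so nothing unpicklable
    # (clients, sockets, the GPT3 wrapper) has to cross process boundaries.
    lm = dspy.OpenAI(model="gpt-3.5-turbo")  # assumed model name
    dspy.settings.configure(lm=lm)

def answer(question: str) -> str:
    predictor = dspy.Predict("question -> answer")
    return predictor(question=question).answer

if __name__ == "__main__":
    questions = ["What is DSPy?", "What is a teleprompter?"]
    with ProcessPoolExecutor(max_workers=2, initializer=_init_worker) as pool:
        for result in pool.map(answer, questions):
            print(result)
```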

As far as I know, it works using `uvicorn` or `gunicorn`
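
For example, a minimal FastAPI wrapper served with `uvicorn`; the model name and the program itself are placeholders:

```python
# app.py -- hypothetical minimal service around a dspy program
import dspy
from fastapi import FastAPI

dspy.settings.configure(lm=dspy.OpenAI(model="gpt-3.5-turbo"))  # assumed model
program = dspy.Predict("question -> answer")

app = FastAPI()

@app.get("/ask")
def ask(question: str):
    return {"answer": program(question=question).answer}

# Run with either:
#   uvicorn app:app --workers 4
#   gunicorn -k uvicorn.workers.UvicornWorker app:app
```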

I'm not really sure why having multiple instances would trigger token or rate limits faster, or how centralizing it helps with data retrieval. You could try to make it serializable,...
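
If you do go down that route, one common pattern is to drop the unpicklable client in `__getstate__` and rebuild it on load; a sketch with made-up class and client names:

```python
import pickle

class LMWrapper:
    """Hypothetical wrapper around an unpicklable API client."""

    def __init__(self, model: str):
        self.model = model
        self.client = self._make_client()

    def _make_client(self):
        # Placeholder for whatever client object refuses to pickle.
        return object()

    def __getstate__(self):
        state = self.__dict__.copy()
        state.pop("client", None)      # drop the live client before pickling
        return state

    def __setstate__(self, state):
        self.__dict__.update(state)
        self.client = self._make_client()  # rebuild it after unpickling

wrapper = pickle.loads(pickle.dumps(LMWrapper("gpt-3.5-turbo")))
```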

Similar issue: https://github.com/stanfordnlp/dspy/issues/1087

This is an issue with Groq; I would wait until they support n > 1. From their API docs:

```
n
integer or null
Optional
Defaults to...
```
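
In the meantime, a possible workaround is to keep `n=1` and issue the request multiple times yourself; a sketch against Groq's OpenAI-compatible endpoint (the model name is an assumption):

```python
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.groq.com/openai/v1",
    api_key=os.environ["GROQ_API_KEY"],
)

def sample_n(prompt: str, n: int) -> list[str]:
    # Groq currently only supports n=1, so loop instead of asking for n completions.
    completions = []
    for _ in range(n):
        resp = client.chat.completions.create(
            model="llama3-70b-8192",  # assumed model name
            messages=[{"role": "user", "content": prompt}],
            n=1,
            temperature=1.0,
        )
        completions.append(resp.choices[0].message.content)
    return completions
```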

Is `thread_count` set to 2? Maybe an issue with multithreading/batching