openai-cookbook
Examples and guides for using the OpenAI API
When I run the `examples/Visualizing_embeddings_in_2D.ipynb` notebook, I get an error during `tsne.fit_transform(matrix)`:
```
    791 def _check_params_vs_input(self, X):
--> 792     if self.perplexity >= X.shape[0]:
    793         raise ValueError("perplexity must be less than...
```
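The error above comes from scikit-learn's check that `perplexity` must be smaller than the number of samples. A minimal sketch of the workaround, assuming a small random matrix rather than the notebook's actual embedding data:

```python
# Sketch, not the notebook's exact code: scikit-learn's TSNE raises
# "perplexity must be less than n_samples" when the matrix has fewer
# rows than the perplexity value (default 30). Capping perplexity at
# n_samples - 1 avoids the error for small inputs.
import numpy as np
from sklearn.manifold import TSNE

matrix = np.random.RandomState(0).rand(20, 8)  # 20 samples < default perplexity of 30

tsne = TSNE(
    n_components=2,
    perplexity=min(30, matrix.shape[0] - 1),  # must stay below n_samples
    init="random",
    random_state=42,
)
vis = tsne.fit_transform(matrix)
print(vis.shape)  # one 2-D point per input row
```

With 20 rows this fits without error; with the full 1k-review dataset the default perplexity of 30 is already valid, so the cap is a no-op there.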
Fixing issue encountered when using this in some contexts (like virtual environments in notebooks). @sorinsuciu-msft FYI
Hi, I'm trying to fine-tune a model but I'm stuck at file upload. The file status always becomes `failed` after upload. I tried everything, including the exact same sample [here](https://github.com/openai/openai-cookbook/blob/main/examples/azure/finetuning.ipynb)...
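A common cause of a `failed` upload status is a training file that isn't valid JSONL. A quick local check like the sketch below (an illustrative helper, not the API's actual validator) can catch malformed lines before uploading:

```python
# Hypothetical pre-upload check: verify every non-blank line of a
# fine-tuning file parses as JSON, and report the line numbers that don't.
import json
import os
import tempfile

def check_jsonl(path):
    """Return the 1-based line numbers that fail to parse as JSON."""
    bad = []
    with open(path) as f:
        for i, line in enumerate(f, 1):
            if not line.strip():
                continue  # skip blank lines
            try:
                json.loads(line)
            except json.JSONDecodeError:
                bad.append(i)
    return bad

# Illustrative file: line 2 is deliberately malformed.
with tempfile.NamedTemporaryFile("w", suffix=".jsonl", delete=False) as f:
    f.write('{"prompt": "hello", "completion": "world"}\n')
    f.write('{"prompt": "oops", "completion": \n')
    path = f.name

bad_lines = check_jsonl(path)
print(bad_lines)  # [2]
os.remove(path)
```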
I am trying to run the notebooks in the [`fine-tuned_qa`](https://github.com/openai/openai-cookbook/tree/main/examples/fine-tuned_qa) directory. I am experiencing an error with the notebook, specifically in the code block with the following code ```python for name, is_disc...
We can hit the API and create it ourselves, but a pre-included file like `embedded_1k_reviews.csv` would make evaluating, following along with the API, testing which analyses might be worthwhile, etc., more...
The link to download `fine_food_reviews_with_embeddings_1k.csv` in the [zero-shot classification example](https://github.com/openai/openai-cookbook/blob/838f000935d9df03e75e181cbcea2e306850794b/examples/Zero-shot_classification_with_embeddings.ipynb) is either outdated or the column headings are incorrect. It's still serving embeddings from babbage instead of ada at the...
I'm getting an error when running the line `df["ada_similarity"] = df.ada_similarity.apply(eval).apply(np.array)` from the example https://github.com/openai/openai-cookbook/blob/main/examples/Clustering.ipynb. The error I'm getting is: `eval() arg 1 must be a string, bytes or code object`...
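That `eval()` error typically means the column already holds parsed lists or arrays rather than their string representation, so there is nothing left to `eval`. A minimal sketch of a type-checking workaround, using a toy DataFrame with an assumed mix of parsed and string cells:

```python
# Sketch: `df.col.apply(eval)` fails when some cells are already lists.
# Only eval string cells and pass parsed ones through unchanged.
import numpy as np
import pandas as pd

# Toy data: one cell already a list, one still a string (assumption for illustration).
df = pd.DataFrame({"ada_similarity": [[0.1, 0.2], "[0.3, 0.4]"]})

def to_array(cell):
    # eval only strings (ast.literal_eval would be a safer choice);
    # anything else is assumed already parsed.
    return np.array(eval(cell)) if isinstance(cell, str) else np.array(cell)

df["ada_similarity"] = df.ada_similarity.apply(to_array)
print(df.ada_similarity.iloc[1])  # array([0.3, 0.4])
```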
The GitHub UI doesn't allow you to horizontally scroll code blocks, and this one is too long to display even in a maximized window. Let's line-wrap the comment for readability.
If you download the `.csv` from the provided CDN URL, it was generated before the model consolidation and still has `babbage_search` in its header row. Commented two different workarounds for...
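One workaround for the stale header is renaming the pre-consolidation columns to the names the notebook expects. A sketch under the assumption that the target column names are the `ada_*` equivalents (the exact names are inferred from the issue text, not confirmed):

```python
# Sketch: rename pre-consolidation babbage_* columns to the assumed
# post-consolidation ada_* names before running the notebook cells.
import pandas as pd

# Stand-in for the downloaded CSV; only the headers matter here.
df = pd.DataFrame(columns=["babbage_search", "babbage_similarity"])

df = df.rename(columns={
    "babbage_search": "ada_search",          # assumed target name
    "babbage_similarity": "ada_similarity",  # assumed target name
})
print(list(df.columns))  # ['ada_search', 'ada_similarity']
```

The other direction also works: leave the CSV untouched and point the notebook's column references at the `babbage_*` names instead.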