
chore(deps): bump llama-index from 0.10.33 to 0.10.68

Open · dependabot[bot] opened this issue 1 year ago · 0 comments

Bumps llama-index from 0.10.33 to 0.10.68.
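The bump stays within the 0.10.x series, so it is a patch-level move rather than a minor or major upgrade. A minimal sketch of that sanity check using plain tuple comparison (for illustration only; this is not Dependabot's actual version resolver):

```python
def parse(v: str) -> tuple[int, ...]:
    """Split a dotted version string into a tuple of ints for comparison."""
    return tuple(int(part) for part in v.split("."))

old, new = parse("0.10.33"), parse("0.10.68")

assert new > old                      # the bump moves forward
assert new[:2] == old[:2] == (0, 10)  # same major.minor series, patch-level only
```

Because only the last component changes, the upgrade should be low-risk under the package's own versioning conventions, though the release notes below are still worth scanning for behavioral changes (e.g. the OpenAI dependency being removed from core).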

Release notes

Sourced from llama-index's releases.

2024-08-21 (v0.10.68)

llama-index-core [0.10.68]

  • remove nested progress bars in base element node parser (#15550)
  • Adding exhaustive docs for workflows (#15556)
  • Adding multi-strategy workflow with reflection notebook example (#15445)
  • remove openai dep from core (#15527)
  • Improve token counter to handle more response types (#15501)
  • feat: Allow using step decorator without parentheses (#15540)
  • feat: workflow services (aka nested workflows) (#15325)
  • Remove requirement to specify "allowed_query_fields" parameter when using "cypher_validator" in TextToCypher retriever (#15506)

llama-index-embeddings-mistralai [0.1.6]

  • fix mistral embeddings usage (#15508)

llama-index-embeddings-ollama [0.2.0]

  • use ollama client for embeddings (#15478)

llama-index-embeddings-openvino [0.2.1]

  • support static input shape for openvino embedding and reranker (#15521)

llama-index-graph-stores-neptune [0.1.8]

  • Added code to expose structured schema for Neptune (#15507)

llama-index-llms-ai21 [0.3.2]

  • Integration: AI21 Tools support (#15518)

llama-index-llms-bedrock [0.1.13]

  • Support token counting for llama-index integration with bedrock (#15491)

llama-index-llms-cohere [0.2.2]

  • feat: add tool calling support for achat cohere (#15539)

llama-index-llms-gigachat [0.1.0]

  • Adding gigachat LLM support (#15313)

llama-index-llms-openai [0.1.31]

  • Fix incorrect type in OpenAI token usage report (#15524)
  • allow streaming token counts for openai (#15548)

llama-index-postprocessor-nvidia-rerank [0.2.1]

... (truncated)

Changelog

Sourced from llama-index's changelog.

(identical to the release notes above)

Commits

Dependabot compatibility score

You can trigger a rebase of this PR by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

dependabot[bot] · Aug 22 '24 05:08