
Update npm package `llamaindex` to v0.7.0

Open · hash-worker[bot] opened this issue 1 year ago · 2 comments

This PR contains the following updates:

| Package | Type | Update | Change | Pending |
| --- | --- | --- | --- | --- |
| llamaindex (source) | dependencies | minor | 0.2.10 -> 0.7.0 | 0.7.3 (+2) |

Release Notes

run-llama/LlamaIndexTS (llamaindex)

v0.7.0

Compare Source

Minor Changes
  • 1364e8e: update metadata extractors to use PromptTemplate
  • 96fc69c: Correct initialization of QuestionsAnsweredExtractor so that it uses the promptTemplate arg when passed in
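
  A rough sketch of what these two changes enable; the template variables used here and whether the extractor's promptTemplate argument accepts a PromptTemplate instance (versus a raw template string) are assumptions for illustration, not confirmed API details:

  ```ts
  import { PromptTemplate, QuestionsAnsweredExtractor } from "llamaindex";

  // Hypothetical custom prompt; variable names are assumptions.
  const promptTemplate = new PromptTemplate({
    templateVars: ["context", "numQuestions"],
    template:
      "Here is the context:\n{context}\n\nGive {numQuestions} questions this context can answer:",
  });

  // As of 0.7.0 the promptTemplate argument is actually used during initialization.
  const extractor = new QuestionsAnsweredExtractor({
    questions: 3,
    promptTemplate,
  });
  ```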
Patch Changes

v0.6.22

Compare Source

Patch Changes
  • 5729bd9: Fix LlamaCloud API calls for ensuring an index and for file uploads

v0.6.21

Compare Source

Patch Changes
  • 6f75306: feat: support metadata filters for AstraDB
  • 94cb4ad: feat: Add metadata filters to ChromaDb and update to 1.9.2

v0.6.20

Compare Source

Patch Changes

v0.6.19

Compare Source

Patch Changes
  • 62cba52: Add ensureIndex function to LlamaCloudIndex
  • d265e96: fix: ignore resolving unpdf for nextjs
  • d30bbf7: Convert undefined values to null in LlamaCloud filters
  • 53fd00a: Fix getPipelineId in LlamaCloudIndex

v0.6.18

Compare Source

Patch Changes

v0.6.17

Compare Source

Patch Changes

v0.6.16

Compare Source

Patch Changes

v0.6.15

Compare Source

Patch Changes

v0.6.14

Compare Source

Patch Changes

v0.6.13

Compare Source

Patch Changes

v0.6.12

Compare Source

Patch Changes
  • f7b4e94: feat: add filters for pinecone
  • 78037a6: fix: bypass service context embed model
  • 1d9e3b1: fix: export llama reader in non-nodejs runtime

v0.6.11

Compare Source

Patch Changes

v0.6.10

Compare Source

Patch Changes

v0.6.9

Compare Source

Patch Changes

v0.6.8

Compare Source

Patch Changes

v0.6.7

Compare Source

Patch Changes
  • 23bcc37: fix: add serializer in doc store

    For better performance, PostgresDocumentStore no longer uses JSON.stringify.

v0.6.6

Compare Source

Patch Changes

v0.6.5

Compare Source

Patch Changes
  • e9714db: feat: update PGVectorStore

    • the constructor parameters config.user, config.database, config.password, and config.connectionString moved into config.clientConfig
    • if you pass a pg.Client or pg.Pool instance to PGVectorStore, move it to config.client and set config.shouldConnect to false if it is already connected
    • the default value of PGVectorStore.collection is now "data" instead of "" (an empty string)
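
    A minimal migration sketch for this constructor change, assuming the llamaindex/vector-store subpath import (recommended since 0.5.25) and an environment variable for the connection string:

    ```ts
    import pg from "pg";
    import { PGVectorStore } from "llamaindex/vector-store";

    // After 0.6.5: connection options live under clientConfig
    // (previously they were top-level constructor options).
    const store = new PGVectorStore({
      clientConfig: { connectionString: process.env.PG_CONNECTION_STRING },
      // collection now defaults to "data" instead of ""
    });

    // If you already manage a pg.Pool (or pg.Client), hand it over via `client`
    // and set shouldConnect to false when it is already connected.
    const pool = new pg.Pool({ connectionString: process.env.PG_CONNECTION_STRING });
    const storeFromPool = new PGVectorStore({ client: pool, shouldConnect: false });
    ```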

v0.6.4

Compare Source

Patch Changes

v0.6.3

Compare Source

Patch Changes

v0.6.2

Compare Source

Patch Changes
  • 5729bd9: Fix LlamaCloud API calls for ensuring an index and for file uploads

v0.6.1

Compare Source

Patch Changes
  • 62cba52: Add ensureIndex function to LlamaCloudIndex
  • d265e96: fix: ignore resolving unpdf for nextjs
  • d30bbf7: Convert undefined values to null in LlamaCloud filters
  • 53fd00a: Fix getPipelineId in LlamaCloudIndex

v0.6.0

Compare Source

Minor Changes
Patch Changes

v0.5.27

Compare Source

Patch Changes
  • 7edeb1c: feat: decouple openai from llamaindex module

    This should be a non-breaking change; going forward, you can install just @llamaindex/openai to reduce the bundle size (see the sketch below).

  • Updated dependencies [7edeb1c]
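
    A sketch of using the standalone package; the exact exports of @llamaindex/openai (OpenAI here) and the model name are assumptions used for illustration:

    ```ts
    // npm install llamaindex @llamaindex/openai
    import { Settings } from "llamaindex";
    import { OpenAI } from "@llamaindex/openai";

    // Configure the global LLM from the decoupled package rather than relying
    // on the OpenAI classes bundled with `llamaindex`.
    Settings.llm = new OpenAI({ model: "gpt-4o-mini", temperature: 0 });
    ```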

v0.5.26

Compare Source

Patch Changes
  • ffe0cd1: feat: add openai o1 support
  • ffe0cd1: feat: add PostgreSQL storage

v0.5.25

Compare Source

Patch Changes
  • 4810364: fix: handle RouterQueryEngine with string query

  • d3bc663: refactor: export vector store only in nodejs environment on top level

    If you see missing module errors, change vector store related imports to llamaindex/vector-store (see the sketch below).

  • Updated dependencies [4810364]
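
    For example, with PGVectorStore standing in for any affected vector store class, the import change looks like this:

    ```ts
    // Before 0.5.25 (top-level export):
    // import { PGVectorStore } from "llamaindex";

    // From 0.5.25 on, import vector stores from the dedicated subpath:
    import { PGVectorStore } from "llamaindex/vector-store";
    ```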

v0.5.24

Compare Source

Patch Changes

v0.5.23

Compare Source

Patch Changes

v0.5.22

Compare Source

Patch Changes

v0.5.21

Compare Source

Patch Changes
  • ae1149f: feat: add JSON streaming to JSONReader

  • 2411c9f: Auto-create index for MongoDB vector store (if not exists)

  • e8f229c: Remove logging from MongoDB Atlas Vector Store

  • 11b3856: implement filters for MongoDBAtlasVectorSearch

  • 83d7f41: Fix database insertion for PGVectorStore

    It will now:

    • throw an error if there is an insertion error.
    • upsert documents with the same id.
    • add all documents to the database as a single INSERT call (inside a transaction).
  • 0148354: refactor: prompt system

    Adds a PromptTemplate module with strong type checking (see the sketch after this list).

  • 1711f6d: Export imageToDataUrl for using images in chat

  • Updated dependencies [0148354]
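
    A minimal sketch of the new PromptTemplate module referenced by the "refactor: prompt system" entry; the constructor option names (template, templateVars) and the format() call are assumptions about the typed API:

    ```ts
    import { PromptTemplate } from "llamaindex";

    // templateVars drives the "strong type check": format() expects exactly these keys.
    const qaPrompt = new PromptTemplate({
      templateVars: ["context", "query"],
      template: "Context:\n{context}\n\nAnswer the question: {query}",
    });

    const prompt = qaPrompt.format({
      context: "LlamaIndexTS is a TypeScript data framework for LLM apps.",
      query: "What is LlamaIndexTS?",
    });
    ```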

v0.5.20

Compare Source

Patch Changes
  • d9d6c56: Add support for MetadataFilters for PostgreSQL
  • 22ff486: Add tiktoken WASM to withLlamaIndex
  • eed0b04: fix: use LLM metadata mode for generating context of ContextChatEngine

v0.5.19

Compare Source

Patch Changes
  • fcbf183: implement llamacloud file service

v0.5.18

Compare Source

Patch Changes

v0.5.17

Compare Source

Patch Changes
  • c654398: Implement Weaviate Vector Store in TS

v0.5.16

Compare Source

Patch Changes

v0.5.15

Compare Source

Patch Changes
  • 01c184c: Add is_empty operator for filtering vector store
  • 07a275f: chore: bump openai

v0.5.14

Compare Source

Patch Changes
  • c825a2f: Add gpt-4o-mini to Azure. Add 2024-06-01 API version for Azure

v0.5.13

Compare Source

Patch Changes

v0.5.12

Compare Source

Patch Changes

v0.5.11

Compare Source

Patch Changes

v0.5.10

Compare Source

Patch Changes

v0.5.9

Compare Source

Patch Changes
  • 15962b3: feat: node parser refactor

    Aligns the text splitter logic with Python; the behavior is now nearly identical. Adds Zod validation of inputs, better error messages, and an event system.

    This is not considered a breaking change, since the output does not differ significantly from the previous version, but some edge cases change, such as the page separator and the constructor parameters.

  • Updated dependencies [15962b3]

v0.5.8

Compare Source

Patch Changes
  • 3d5ba08: fix: update user agent in AssemblyAI
  • d917cdc: Add azure interpreter tool to tool factory

v0.5.7

Compare Source

Patch Changes
  • ec59acd: fix: bundling issue with pnpm

v0.5.6

Compare Source

Patch Changes

v0.5.5

Compare Source

Patch Changes

v0.5.4

Compare Source

Patch Changes
  • 1a65ead: feat: add vendorMultimodal params to LlamaParseReader

v0.5.3

Compare Source

Patch Changes

v0.5.2

Compare Source

Patch Changes
  • 7edeb1c: feat: decouple openai from llamaindex module

    This should be a non-breaking change; going forward, you can install just @llamaindex/openai to reduce the bundle size.

  • Updated dependencies [7edeb1c]

v0.5.1

Compare Source

Patch Changes
  • fcbf183: implement llamacloud file service

v0.5.0

Compare Source

Minor Changes
  • 16ef5dd: refactor: simplify callback manager

    Change event.detail.payload to event.detail
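
    A hedged before/after sketch; the "llm-start" event name and the Settings.callbackManager.on listener API are assumptions used only to illustrate the payload change:

    ```ts
    import { Settings } from "llamaindex";

    Settings.callbackManager.on("llm-start", (event) => {
      // Before 0.5.0: const payload = event.detail.payload;
      // From 0.5.0 on, the payload is the detail itself:
      console.log(event.detail);
    });
    ```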

Patch Changes

v0.4.14

Compare Source

Patch Changes

v0.4.13

Compare Source

Patch Changes
  • e8f8bea: feat: add boundingBox and targetPages to LlamaParseReader
  • 304484b: feat: add ignoreErrors flag to LlamaParseReader
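
  A sketch combining the two flags above; the value formats for boundingBox and targetPages are assumptions (only the option names come from the release notes):

  ```ts
  import { LlamaParseReader } from "llamaindex";

  const reader = new LlamaParseReader({
    resultType: "markdown",
    targetPages: "0,1,2", // assumed: comma-separated page indices to parse
    boundingBox: "0.1,0,0,0", // assumed: margins (top,right,bottom,left) to clip
    ignoreErrors: true, // continue instead of throwing when a file fails to parse
  });

  const documents = await reader.loadData("./reports/q3.pdf");
  ```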

v0.4.12

Compare Source

Patch Changes

v0.4.11

Compare Source

Patch Changes
  • 8bf5b4a: fix: llama parse input spreadsheet

v0.4.10

Compare Source

Patch Changes
  • 7dce3d2: fix: disable External Filters for Gemini

v0.4.9

Compare Source

Patch Changes
  • 3a96a48: fix: anthropic image input

v0.4.8

Compare Source

Patch Changes
  • 83ebdfb: fix: next.js build error

v0.4.6

Compare Source

Patch Changes
  • 1feb23b: feat: Gemini tool calling for agent support
  • 08c55ec: Add metadata to PDFs and use Uint8Array for readers content

v0.4.5

Compare Source

Patch Changes
  • 6c3e5d0: fix: switch to correct reference for a static function

v0.4.4

Compare Source

Patch Changes
  • 42eb73a: Fix IngestionPipeline not working without vectorStores

v0.4.3

Compare Source

Patch Changes

v0.4.1

Compare Source

Patch Changes

v0.4.0

Compare Source

Minor Changes
  • 436bc41: Unify chat engine response and agent response
Patch Changes
  • a44e54f: Truncate text to embed for OpenAI if it exceeds maxTokens
  • a51ed8d: feat: add support for managed identity for Azure OpenAI
  • d3b635b: fix: agents to use chat history

v0.3.17

Compare Source

Patch Changes
  • 6bc5bdd: feat: add cache disabling, fast mode, do not unroll columns mode and custom page separator to LlamaParseReader
  • bf25ff6: fix: polyfill for cloudflare worker
  • e6d6576: chore: use unpdf

v0.3.16

Compare Source

Patch Changes
  • 11ae926: feat: add numCandidates setting to MongoDBAtlasVectorStore for tuning queries
  • 631f000: feat: DeepInfra LLM implementation
  • 1378ec4: feat: set default model to gpt-4o
  • 6b1ded4: add gpt4o-mode, invalidate cache and skip diagonal text to LlamaParseReader
  • 4d4bd85: Show error message if agent tool is called with partial JSON
  • 24a9d1e: add json mode and image retrieval to LlamaParseReader
  • 45952de: add concurrency management for SimpleDirectoryReader
  • 54230f0: feat: Gemini GA release models
  • a29d835: setDocumentHash should be async
  • 73819bf: Unify metadata and ID handling of documents, allow files to be read by Buffer

v0.3.15

Compare Source

Patch Changes
  • 6e156ed: Use images in context chat engine
  • 265976d: fix bug with node decorator
  • 8e26f75: Add retrieval for images using multi-modal messages

v0.3.14

Compare Source

Patch Changes
  • 6ff7576: Added GPT-4o for Azure
  • 94543de: Added the latest preview Gemini models; multi-modal images are now taken into account

v0.3.13

Compare Source

Patch Changes
  • 1b1081b: Add vectorStores to storage context to define vector store per modality
  • 37525df: Added support for accessing Gemini via Vertex AI
  • 660a2b3: Fix text before heading in markdown reader
  • a1f2475: Add system prompt to ContextChatEngine

v0.3.12

Compare Source

Patch Changes

v0.3.11

Compare Source

Patch Changes
  • e072c45: fix: remove non-standard API pipeline

  • 9e133ac: refactor: remove defaultFS from parameters

    Passing fs as a parameter is no longer accepted, since it is unnecessary once the JS environment is determined.

    This was a polyfill mechanism for non-Node.js environments; APIs are now polyfilled a different way.

  • 447105a: Improve Gemini message and context preparation

  • 320be3f: Force ChromaDB version to 1.7.3 (to prevent NextJS issues)

  • Updated dependencies [e072c45]

  • Updated dependencies [9e133ac]

v0.3.10

Compare Source

Patch Changes

v0.3.9

Compare Source

Patch Changes
  • c3747d0: fix: import @xenova/transformers

    For now, if you use llamaindex in Next.js, you need to add the plugin from llamaindex/next to ensure some module resolutions are correct (see the sketch below).
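
    A sketch of that Next.js plugin (the same withLlamaIndex helper referenced by the 0.5.20 entry), assuming an ESM next.config.mjs:

    ```ts
    // next.config.mjs
    import withLlamaIndex from "llamaindex/next";

    /** @type {import('next').NextConfig} */
    const nextConfig = {
      // your existing Next.js options
    };

    // Wrapping the config lets llamaindex adjust module resolution (e.g. for
    // @xenova/transformers and the tiktoken WASM asset) during the Next.js build.
    export default withLlamaIndex(nextConfig);
    ```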

v0.3.8

Compare Source

Patch Changes
  • ce94780: Add page number to read PDFs and use generated IDs for PDF and markdown content

v0.3.7

Compare Source

Patch Changes
  • b6a6606: feat: allow change host of ollama
  • b6a6606: chore: export ollama in default js runtime

v0.3.6

Compare Source

Patch Changes

v0.3.5

Compare Source

Patch Changes

v0.3.4

Compare Source

Patch Changes
  • 1dce275: fix: export StorageContext on edge runtime
  • d10533e: feat: add hugging face llm
  • 2008efe: feat: add verbose mode to Agent
  • 5e61934: fix: remove clone object in CallbackManager.dispatchEvent
  • 9e74a43: feat: add top k to asQueryEngine
  • ee719a1: fix: streaming for ReAct Agent

v0.3.3

Compare Source

Patch Changes
  • e8c41c5: fix: wrong gemini streaming chat response



Configuration

📅 Schedule: Branch creation - "before 4am every weekday,every weekend" (UTC), Automerge - At any time (no schedule defined).

🚦 Automerge: Enabled.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • [ ] If you want to rebase/retry this PR, check this box

This PR has been generated by Renovate Bot.

hash-worker[bot] · Sep 15 '24 00:09

Codecov Report

Attention: Patch coverage is 0% with 42 lines in your changes missing coverage. Please review.

Project coverage is 21.72%. Comparing base (60e2a5a) to head (99efe74). Report is 5 commits behind head on main.

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| ...llower-agent/llama-index/simple-storage-context.ts | 0.00% | 22 Missing :warning: |
| .../link-follower-agent/llama-index/index-pdf-file.ts | 0.00% | 20 Missing :warning: |
Additional details and impacted files
@@           Coverage Diff           @@
##             main    #5154   +/-   ##
=======================================
  Coverage   21.72%   21.72%           
=======================================
  Files         566      566           
  Lines       19157    19157           
  Branches     2752     2755    +3     
=======================================
  Hits         4162     4162           
  Misses      14943    14943           
  Partials       52       52           
| Flag | Coverage Δ |
| --- | --- |
| apps.hash-ai-worker-ts | 1.32% <0.00%> (ø) |

Flags with carried forward coverage won't be shown.

:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.

codecov[bot] · Sep 26 '24 19:09

Edited/Blocked Notification

Renovate will not automatically rebase this PR, because it does not recognize the last commit author and assumes somebody else may have edited the PR.

You can manually request rebase by checking the rebase/retry box above.

⚠️ Warning: custom changes will be lost.

hash-worker[bot] · Dec 18 '24 15:12

@indietyp Do we want to address these Semgrep flags here, or were they pre-existing in the codebase already?

vilkinsons · Dec 18 '24 17:12

these are from a previous iteration that I missed, will fix them!

indietyp · Dec 18 '24 17:12

Benchmark results

@rust/hash-graph-benches – Integrations

representative_read_entity

| Function | Value | Mean | Flame graphs |
| --- | --- | --- | --- |
| entity_by_id | entity type ID: https://blockprotocol.org/@alice/types/entity-type/person/v/1 | $$16.3 \mathrm{ms} \pm 176 \mathrm{μs}\left({\color{lightgreen}-29.146 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_id | entity type ID: https://blockprotocol.org/@alice/types/entity-type/block/v/1 | $$16.8 \mathrm{ms} \pm 192 \mathrm{μs}\left({\color{red}8.76 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_id | entity type ID: https://blockprotocol.org/@alice/types/entity-type/book/v/1 | $$16.1 \mathrm{ms} \pm 161 \mathrm{μs}\left({\color{gray}-4.451 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_id | entity type ID: https://blockprotocol.org/@alice/types/entity-type/song/v/1 | $$17.4 \mathrm{ms} \pm 164 \mathrm{μs}\left({\color{gray}3.57 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_id | entity type ID: https://blockprotocol.org/@alice/types/entity-type/page/v/2 | $$16.2 \mathrm{ms} \pm 166 \mathrm{μs}\left({\color{gray}2.12 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_id | entity type ID: https://blockprotocol.org/@alice/types/entity-type/playlist/v/1 | $$15.0 \mathrm{ms} \pm 142 \mathrm{μs}\left({\color{lightgreen}-11.985 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_id | entity type ID: https://blockprotocol.org/@alice/types/entity-type/building/v/1 | $$15.7 \mathrm{ms} \pm 138 \mathrm{μs}\left({\color{lightgreen}-6.834 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_id | entity type ID: https://blockprotocol.org/@alice/types/entity-type/organization/v/1 | $$16.2 \mathrm{ms} \pm 167 \mathrm{μs}\left({\color{lightgreen}-29.548 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_id | entity type ID: https://blockprotocol.org/@alice/types/entity-type/uk-address/v/1 | $$15.6 \mathrm{ms} \pm 170 \mathrm{μs}\left({\color{lightgreen}-7.150 \mathrm{\%}}\right) $$ | Flame Graph |

representative_read_multiple_entities

| Function | Value | Mean | Flame graphs |
| --- | --- | --- | --- |
| entity_by_property | depths: DT=255, PT=255, ET=255, E=255 | $$68.4 \mathrm{ms} \pm 313 \mathrm{μs}\left({\color{gray}-0.362 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_property | depths: DT=0, PT=0, ET=0, E=0 | $$40.8 \mathrm{ms} \pm 248 \mathrm{μs}\left({\color{gray}-1.927 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_property | depths: DT=2, PT=2, ET=2, E=2 | $$59.3 \mathrm{ms} \pm 338 \mathrm{μs}\left({\color{gray}0.154 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_property | depths: DT=0, PT=0, ET=0, E=2 | $$45.1 \mathrm{ms} \pm 145 \mathrm{μs}\left({\color{gray}-0.414 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_property | depths: DT=0, PT=0, ET=2, E=2 | $$50.6 \mathrm{ms} \pm 261 \mathrm{μs}\left({\color{gray}-0.907 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_property | depths: DT=0, PT=2, ET=2, E=2 | $$55.1 \mathrm{ms} \pm 234 \mathrm{μs}\left({\color{gray}-1.729 \mathrm{\%}}\right) $$ | Flame Graph |
| link_by_source_by_property | depths: DT=255, PT=255, ET=255, E=255 | $$105 \mathrm{ms} \pm 478 \mathrm{μs}\left({\color{gray}-0.025 \mathrm{\%}}\right) $$ | Flame Graph |
| link_by_source_by_property | depths: DT=0, PT=0, ET=0, E=0 | $$40.7 \mathrm{ms} \pm 203 \mathrm{μs}\left({\color{gray}-0.296 \mathrm{\%}}\right) $$ | Flame Graph |
| link_by_source_by_property | depths: DT=2, PT=2, ET=2, E=2 | $$95.7 \mathrm{ms} \pm 562 \mathrm{μs}\left({\color{gray}-0.346 \mathrm{\%}}\right) $$ | Flame Graph |
| link_by_source_by_property | depths: DT=0, PT=0, ET=0, E=2 | $$80.4 \mathrm{ms} \pm 443 \mathrm{μs}\left({\color{gray}1.25 \mathrm{\%}}\right) $$ | Flame Graph |
| link_by_source_by_property | depths: DT=0, PT=0, ET=2, E=2 | $$87.4 \mathrm{ms} \pm 347 \mathrm{μs}\left({\color{gray}-0.442 \mathrm{\%}}\right) $$ | Flame Graph |
| link_by_source_by_property | depths: DT=0, PT=2, ET=2, E=2 | $$93.2 \mathrm{ms} \pm 451 \mathrm{μs}\left({\color{gray}0.767 \mathrm{\%}}\right) $$ | Flame Graph |

representative_read_entity_type

| Function | Value | Mean | Flame graphs |
| --- | --- | --- | --- |
| get_entity_type_by_id | Account ID: d4e16033-c281-4cde-aa35-9085bf2e7579 | $$2.12 \mathrm{ms} \pm 7.51 \mathrm{μs}\left({\color{gray}0.199 \mathrm{\%}}\right) $$ | Flame Graph |

scaling_read_entity_complete_one_depth

| Function | Value | Mean | Flame graphs |
| --- | --- | --- | --- |
| entity_by_id | 50 entities | $$5.63 \mathrm{s} \pm 283 \mathrm{ms}\left({\color{red}5.46 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_id | 5 entities | $$27.1 \mathrm{ms} \pm 201 \mathrm{μs}\left({\color{gray}0.994 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_id | 1 entities | $$20.5 \mathrm{ms} \pm 72.1 \mathrm{μs}\left({\color{gray}-0.204 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_id | 10 entities | $$57.3 \mathrm{ms} \pm 245 \mathrm{μs}\left({\color{gray}-1.412 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_id | 25 entities | $$84.4 \mathrm{ms} \pm 277 \mathrm{μs}\left({\color{lightgreen}-51.853 \mathrm{\%}}\right) $$ | Flame Graph |

scaling_read_entity_linkless

| Function | Value | Mean | Flame graphs |
| --- | --- | --- | --- |
| entity_by_id | 1 entities | $$1.95 \mathrm{ms} \pm 7.24 \mathrm{μs}\left({\color{gray}0.772 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_id | 100 entities | $$2.15 \mathrm{ms} \pm 5.89 \mathrm{μs}\left({\color{gray}1.60 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_id | 10 entities | $$1.95 \mathrm{ms} \pm 4.38 \mathrm{μs}\left({\color{gray}1.46 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_id | 1000 entities | $$2.93 \mathrm{ms} \pm 13.7 \mathrm{μs}\left({\color{gray}2.19 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_id | 10000 entities | $$13.5 \mathrm{ms} \pm 54.6 \mathrm{μs}\left({\color{red}32.5 \mathrm{\%}}\right) $$ | Flame Graph |

scaling_read_entity_complete_zero_depth

| Function | Value | Mean | Flame graphs |
| --- | --- | --- | --- |
| entity_by_id | 50 entities | $$4.19 \mathrm{ms} \pm 38.4 \mathrm{μs}\left({\color{gray}4.00 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_id | 5 entities | $$1.94 \mathrm{ms} \pm 9.12 \mathrm{μs}\left({\color{gray}-0.376 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_id | 1 entities | $$1.95 \mathrm{ms} \pm 10.1 \mathrm{μs}\left({\color{gray}1.30 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_id | 10 entities | $$2.12 \mathrm{ms} \pm 12.6 \mathrm{μs}\left({\color{gray}-0.813 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_id | 25 entities | $$3.28 \mathrm{ms} \pm 9.09 \mathrm{μs}\left({\color{gray}-0.381 \mathrm{\%}}\right) $$ | Flame Graph |

github-actions[bot] · Dec 18 '24 18:12