Update npm package `llamaindex` to v0.7.0
This PR contains the following updates:
| Package | Type | Update | Change | Pending |
|---|---|---|---|---|
| llamaindex (source) | dependencies | minor | 0.2.10 -> 0.7.0 | 0.7.3 (+2) |
Release Notes
run-llama/LlamaIndexTS (llamaindex)
v0.7.0
Minor Changes
- 1364e8e: update metadata extractors to use PromptTemplate
- 96fc69c: Correct initialization of QuestionsAnsweredExtractor so that it uses the promptTemplate arg when passed in
Patch Changes
- 3b7736f: feat: added gemini 002 support
- Updated dependencies [1364e8e]
- Updated dependencies [96fc69c]
v0.6.22
Patch Changes
- 5729bd9: Fix LlamaCloud API calls for ensuring an index and for file uploads
v0.6.21
Patch Changes
- 6f75306: feat: support metadata filters for AstraDB
- 94cb4ad: feat: Add metadata filters to ChromaDB and update to 1.9.2
v0.6.20
Patch Changes
- 6a9a7b1: fix: take init api key into account
- Updated dependencies [6a9a7b1]
  - @llamaindex/openai@0.1.16
  - @llamaindex/groq@0.0.15
v0.6.19
Patch Changes
- 62cba52: Add ensureIndex function to LlamaCloudIndex
- d265e96: fix: ignore resolving unpdf for nextjs
- d30bbf7: Convert undefined values to null in LlamaCloud filters
- 53fd00a: Fix getPipelineId in LlamaCloudIndex
v0.6.18
Patch Changes
- 5f67820: Fix that node parsers generate nodes with UUIDs
- fe08d04: Fix LlamaCloud retrieval with multiple pipelines
- Updated dependencies [5f67820]
  - @llamaindex/core@0.2.12
  - @llamaindex/cloud@0.2.14
  - @llamaindex/ollama@0.0.7
  - @llamaindex/openai@0.1.15
  - @llamaindex/groq@0.0.14
v0.6.17
Patch Changes
- ee697fb: fix: generate uuid when inserting to Qdrant
- Updated dependencies [ee697fb]
  - @llamaindex/core@0.2.11
  - @llamaindex/cloud@0.2.13
  - @llamaindex/ollama@0.0.6
  - @llamaindex/openai@0.1.14
  - @llamaindex/groq@0.0.13
v0.6.16
Patch Changes
- 63e9846: fix: preFilters does not work with asQueryEngine
- 6f3a31c: feat: add metadata filters for Qdrant vector store (usage sketch after this list)
- Updated dependencies [3489e7d]
- Updated dependencies [468bda5]
  - @llamaindex/core@0.2.10
  - @llamaindex/cloud@0.2.12
  - @llamaindex/ollama@0.0.5
  - @llamaindex/openai@0.1.13
  - @llamaindex/groq@0.0.12
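Taken together, these two entries mean retrieval can be restricted by metadata at query time. A minimal sketch of passing preFilters to asQueryEngine, assuming the `{ key, value, operator }` filter shape used by recent LlamaIndexTS versions:

```ts
import { Document, VectorStoreIndex } from "llamaindex";

// Hypothetical documents carrying metadata to filter on.
const documents = [
  new Document({ text: "Alice's notes", metadata: { author: "alice" } }),
  new Document({ text: "Bob's notes", metadata: { author: "bob" } }),
];

const index = await VectorStoreIndex.fromDocuments(documents);

// preFilters restricts retrieval to nodes whose metadata matches.
const queryEngine = index.asQueryEngine({
  preFilters: {
    filters: [{ key: "author", value: "alice", operator: "==" }],
  },
});

const response = await queryEngine.query({ query: "Summarize Alice's notes" });
console.log(response.toString());
```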
v0.6.15
Patch Changes
- 2a82413: fix(core): set `Settings.llm` to OpenAI by default and support lazy load openai
- Updated dependencies [2a82413]
- Updated dependencies [0b20ff9]
  - @llamaindex/groq@0.0.11
  - @llamaindex/openai@0.1.12
  - @llamaindex/cloud@0.2.11
v0.6.14
Patch Changes
- Updated dependencies [b17d439]
  - @llamaindex/core@0.2.9
  - @llamaindex/ollama@0.0.4
  - @llamaindex/openai@0.1.11
  - @llamaindex/groq@0.0.10
v0.6.13
Patch Changes
- Updated dependencies [981811e]
v0.6.12
Patch Changes
- f7b4e94: feat: add filters for pinecone
- 78037a6: fix: bypass service context embed model
- 1d9e3b1: fix: export llama reader in non-nodejs runtime
v0.6.11
Patch Changes
- df441e2: fix: consoleLogger is missing from `@llamaindex/env`
- Updated dependencies [df441e2]
  - @llamaindex/cloud@0.2.9
  - @llamaindex/core@0.2.8
  - @llamaindex/env@0.1.13
  - @llamaindex/ollama@0.0.3
  - @llamaindex/openai@0.1.10
  - @llamaindex/groq@0.0.9
v0.6.10
Patch Changes
- ebc5105: feat: support `@vercel/postgres`
- 6cce3b1: feat: support `npm:postgres`
- Updated dependencies [96f72ad]
- Updated dependencies [6cce3b1]
v0.6.9
Patch Changes
- Updated dependencies [ac41ed3]
v0.6.8
Patch Changes
- 8b7fdba: refactor: move chat engine & retriever into core
  - `chatHistory` in BaseChatEngine now returns `ChatMessage[] | Promise<ChatMessage[]>`, instead of `BaseMemory` (see the sketch below)
  - update `retrieve-end` type
- Updated dependencies [8b7fdba]
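Because `chatHistory` can now be either an array or a promise, callers should await it. A minimal sketch, assuming a ContextChatEngine set up over a small index (the construction details are illustrative, not taken from the release notes):

```ts
import { ContextChatEngine, Document, VectorStoreIndex } from "llamaindex";

const index = await VectorStoreIndex.fromDocuments([
  new Document({ text: "Example corpus" }),
]);
const chatEngine = new ContextChatEngine({ retriever: index.asRetriever() });

await chatEngine.chat({ message: "Hello" });

// chatHistory is ChatMessage[] | Promise<ChatMessage[]>; awaiting covers both.
const history = await chatEngine.chatHistory;
console.log(history.length);
```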
v0.6.7
Patch Changes
- 23bcc37: fix: add `serializer` in doc store

  `PostgresDocumentStore` will now not use JSON.stringify, for better performance.
v0.6.6
Patch Changes
- d902cc3: Fix context not being sent using ContextChatEngine
- 025ffe6: fix: update `PostgresKVStore` constructor params
- a659574: Adds upstash vector store as a storage
- Updated dependencies [d902cc3]
v0.6.5
Patch Changes
- e9714db: feat: update `PGVectorStore` (see the sketch after this list)
  - move constructor parameters `config.user` | `config.database` | `config.password` | `config.connectionString` into `config.clientConfig`
  - if you pass a `pg.Client` or `pg.Pool` instance to `PGVectorStore`, move it to `config.client`, setting `config.shouldConnect` to false if it's already connected
  - the default value of `PGVectorStore.collection` is now `"data"` instead of `""` (empty string)
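A minimal sketch of the new constructor shape described above; connection values are placeholders and the top-level import path is assumed:

```ts
import pg from "pg";
import { PGVectorStore } from "llamaindex";

// Plain connection parameters now live under clientConfig:
const store = new PGVectorStore({
  clientConfig: {
    user: "postgres",
    database: "vectors",
    password: "secret",
  },
});

// Or hand over an existing pg.Pool (or pg.Client) via config.client,
// with shouldConnect: false if it is already connected:
const pool = new pg.Pool({ connectionString: "postgres://localhost/vectors" });
const storeFromPool = new PGVectorStore({ client: pool, shouldConnect: false });

// When collection is not set, it now defaults to "data" instead of "".
```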
v0.6.4
Patch Changes
- b48bcc3: feat: add `load-transformers` event type when loading the `@xenova/transformers` module

  This would benefit users who want to customize the transformer env.
- Updated dependencies [b48bcc3]
v0.6.3
Patch Changes
- 2cd1383: refactor: align `response-synthesizers` & `chat-engine` modules
  - built-in event system
  - correct class extends
  - align APIs and naming with llama-index Python
  - move stream out of the first parameter to the second parameter for better type checking
  - remove JSONQueryEngine in `@llamaindex/experimental`, as the code quality is not satisfactory; we will bring it back later
- 5c4badb: Extend JinaAPIEmbedding parameters
- Updated dependencies [fb36eff]
- Updated dependencies [d24d3d1]
- Updated dependencies [2cd1383]
v0.6.2
Patch Changes
- 5729bd9: Fix LlamaCloud API calls for ensuring an index and for file uploads
v0.6.1
Patch Changes
- 62cba52: Add ensureIndex function to LlamaCloudIndex
- d265e96: fix: ignore resolving unpdf for nextjs
- d30bbf7: Convert undefined values to null in LlamaCloud filters
- 53fd00a: Fix getPipelineId in LlamaCloudIndex
v0.6.0
Minor Changes
- 11feef8: Add workflows
Patch Changes
- Updated dependencies [11feef8]
v0.5.27
Patch Changes
- 7edeb1c: feat: decouple openai from the `llamaindex` module

  This should be a non-breaking change; you can now install only `@llamaindex/openai` to reduce the bundle size in the future (see the sketch below).
- Updated dependencies [7edeb1c]
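A minimal sketch of using the standalone package, assuming `@llamaindex/openai` re-exports the OpenAI LLM class that previously shipped inside the main package:

```ts
import { OpenAI } from "@llamaindex/openai";
import { Settings } from "llamaindex";

// Configure the LLM from the decoupled package instead of the bundled export.
Settings.llm = new OpenAI({ model: "gpt-4o-mini", temperature: 0 });
```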
v0.5.26
Patch Changes
v0.5.25
Patch Changes
- 4810364: fix: handle `RouterQueryEngine` with string query
- d3bc663: refactor: export vector store only in nodejs environment on top level

  If you see missing-module errors, please change vector store related imports to `llamaindex/vector-store` (see the sketch below).
- Updated dependencies [4810364]
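The migration is a one-line import change; `PGVectorStore` below is just an illustrative store name:

```ts
// Before (top-level export, Node.js only):
// import { PGVectorStore } from "llamaindex";

// After (explicit subpath, portable across runtimes):
import { PGVectorStore } from "llamaindex/vector-store";
```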
v0.5.24
Patch Changes
- Updated dependencies [0bf8d80]
v0.5.23
Patch Changes
- Updated dependencies [711c814]
  - @llamaindex/core@0.1.12
v0.5.22
Patch Changes
- 4648da6: fix: wrong tiktoken version caused NextJs CL template run fail
- Updated dependencies [4648da6]
  - @llamaindex/env@0.1.10
  - @llamaindex/core@0.1.11
v0.5.21
Patch Changes
- ae1149f: feat: add JSON streaming to JSONReader
- 2411c9f: Auto-create index for MongoDB vector store (if not exists)
- e8f229c: Remove logging from MongoDB Atlas Vector Store
- 11b3856: implement filters for MongoDBAtlasVectorSearch
- 83d7f41: Fix database insertion for `PGVectorStore`. It will now:
  - throw an error if there is an insertion error
  - upsert documents with the same id
  - add all documents to the database as a single `INSERT` call (inside a transaction)
- 0148354: refactor: prompt system

  Add `PromptTemplate` module with strong type check (see the sketch after this list).
- 1711f6d: Export imageToDataUrl for using images in chat
- Updated dependencies [0148354]
  - @llamaindex/core@0.1.10
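A minimal sketch of the new prompt system; the option names (`template`, `templateVars`) and the `format()` call are assumed from the typed PromptTemplate API, not quoted from the release notes:

```ts
import { PromptTemplate } from "llamaindex";

// templateVars declares the placeholders; format() is then type-checked
// against exactly those variable names.
const qaPrompt = new PromptTemplate({
  template: "Context:\n{context}\n\nAnswer the question: {query}",
  templateVars: ["context", "query"],
});

const prompt = qaPrompt.format({
  context: "LlamaIndexTS release notes",
  query: "What changed in v0.5.21?",
});
console.log(prompt);
```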
v0.5.20
Patch Changes
- d9d6c56: Add support for MetadataFilters for PostgreSQL
- 22ff486: Add tiktoken WASM to withLlamaIndex
- eed0b04: fix: use LLM metadata mode for generating context of ContextChatEngine
v0.5.19
Patch Changes
- fcbf183: implement llamacloud file service
v0.5.18
Patch Changes
v0.5.17
Patch Changes
- c654398: Implement Weaviate Vector Store in TS
v0.5.16
Patch Changes
v0.5.15
Patch Changes
v0.5.14
Patch Changes
- c825a2f: Add gpt-4o-mini to Azure. Add 2024-06-01 API version for Azure
v0.5.13
Patch Changes
- Updated dependencies [04b2f8e]
v0.5.12
Patch Changes
- 345300f: feat: add splitByPage mode to LlamaParseReader
- da5cfc4: Add metadata filter options to retriever constructors
- da5cfc4: Fix system prompt not used in ContextChatEngine
- Updated dependencies [0452af9]
v0.5.11
Patch Changes
- Updated dependencies [1f680d7]
v0.5.10
Patch Changes
- 086b940: feat: add DeepSeek LLM
- 5d5716b: feat: add a reader for JSON data
- 91d02a4: feat: support transform component callable
- fb6db45: feat: add pageSeparator params to LlamaParseReader
- Updated dependencies [91d02a4]
v0.5.9
Patch Changes
- 15962b3: feat: node parser refactor

  Align the text splitter logic with Python; it now has almost the same logic as Python, with Zod checks for input, better error messages, and an event system.

  This change will not be considered a breaking change since it doesn't have a significant output difference from the last version, but some edge cases will change, like the page separator and the constructor parameters.
- Updated dependencies [15962b3]
v0.5.8
Patch Changes
v0.5.7
Patch Changes
- ec59acd: fix: bundling issue with pnpm
v0.5.6
Patch Changes
- 2562244: feat: add gpt4o-mini
- 325aa51: Implement Jina embedding through Jina api
- ab700ea: Add missing authentication to LlamaCloudIndex.fromDocuments
- 92f0782: feat: use query bundle
- 6cf6ae6: feat: abstract query type
- b7cfe5b: fix: passing max_token option to replicate's api call
- Updated dependencies [6cf6ae6]
v0.5.5
Patch Changes
v0.5.4
Patch Changes
- 1a65ead: feat: add vendorMultimodal params to LlamaParseReader
v0.5.3
Patch Changes
- 9bbbc67: feat: add a reader for Discord messages
- b3681bf: fix: DataCloneError when using FunctionTool
- Updated dependencies [b3681bf]
v0.5.2
Patch Changes
- 7edeb1c: feat: decouple openai from the `llamaindex` module

  This should be a non-breaking change; you can now install only `@llamaindex/openai` to reduce the bundle size in the future.
- Updated dependencies [7edeb1c]
v0.5.1
Patch Changes
- fcbf183: implement llamacloud file service
v0.5.0
Minor Changes
- 16ef5dd: refactor: simplify callback manager

  Change `event.detail.payload` to `event.detail` (see the sketch after this list)
Patch Changes
- 16ef5dd: refactor: move callback manager & llm to core module

  For people who import `llamaindex/llms/base` or `llamaindex/llms/utils`, use `@llamaindex/core/llms` and `@llamaindex/core/utils` instead.
- 36ddec4: fix: typo in custom page separator parameter for LlamaParse
- Updated dependencies [16ef5dd]
- Updated dependencies [16ef5dd]
- Updated dependencies [36ddec4]
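A minimal sketch of adapting a callback handler to the simplified event shape; the `retrieve-end` event name comes from these release notes, while the handler body is illustrative:

```ts
import { Settings } from "llamaindex";

Settings.callbackManager.on("retrieve-end", (event) => {
  // Before v0.5.0 the data sat one level deeper:
  //   const data = event.detail.payload;
  // From v0.5.0 on, the payload is the detail itself:
  const data = event.detail;
  console.log("retrieve-end:", data);
});
```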
v0.4.14
Patch Changes
- Updated dependencies [1c444d5]
v0.4.13
Patch Changes
- e8f8bea: feat: add boundingBox and targetPages to LlamaParseReader
- 304484b: feat: add ignoreErrors flag to LlamaParseReader
v0.4.12
Patch Changes
v0.4.11
Patch Changes
- 8bf5b4a: fix: llama parse input spreadsheet
v0.4.10
Patch Changes
- 7dce3d2: fix: disable External Filters for Gemini
v0.4.9
Patch Changes
- 3a96a48: fix: anthropic image input
v0.4.8
Patch Changes
- 83ebdfb: fix: next.js build error
v0.4.6
Patch Changes
- 1feb23b: feat: Gemini tool calling for agent support
- 08c55ec: Add metadata to PDFs and use Uint8Array for readers content
v0.4.5
Patch Changes
- 6c3e5d0: fix: switch to correct reference for a static function
v0.4.4
Patch Changes
- 42eb73a: Fix IngestionPipeline not working without vectorStores
v0.4.3
Patch Changes
- 2ef62a9: feat: added support for embeddings via HuggingFace Inference API
- Updated dependencies [d4e853c]
- Updated dependencies [a94b8ec]
v0.4.1
Patch Changes
- Updated dependencies [1c444d5]
v0.4.0
Minor Changes
- 436bc41: Unify chat engine response and agent response
Patch Changes
- a44e54f: Truncate text to embed for OpenAI if it exceeds maxTokens
- a51ed8d: feat: add support for managed identity for Azure OpenAI
- d3b635b: fix: agents to use chat history
v0.3.17
Patch Changes
- 6bc5bdd: feat: add cache disabling, fast mode, do-not-unroll-columns mode, and custom page separator to LlamaParseReader
- bf25ff6: fix: polyfill for cloudflare worker
- e6d6576: chore: use `unpdf`
v0.3.16
Patch Changes
- 11ae926: feat: add numCandidates setting to MongoDBAtlasVectorStore for tuning queries
- 631f000: feat: DeepInfra LLM implementation
- 1378ec4: feat: set default model to `gpt-4o`
- 6b1ded4: add gpt4o-mode, invalidate cache, and skip diagonal text to LlamaParseReader
- 4d4bd85: Show error message if agent tool is called with partial JSON
- 24a9d1e: add json mode and image retrieval to LlamaParseReader
- 45952de: add concurrency management for SimpleDirectoryReader
- 54230f0: feat: Gemini GA release models
- a29d835: setDocumentHash should be async
- 73819bf: Unify metadata and ID handling of documents; allow files to be read by `Buffer`
v0.3.15
Patch Changes
- 6e156ed: Use images in context chat engine
- 265976d: fix bug with node decorator
- 8e26f75: Add retrieval for images using multi-modal messages
v0.3.14
Patch Changes
- 6ff7576: Added GPT-4o for Azure
- 94543de: Added the latest preview Gemini models; multi-modal images are now taken into account
v0.3.13
Patch Changes
- 1b1081b: Add vectorStores to storage context to define vector store per modality
- 37525df: Added support for accessing Gemini via Vertex AI
- 660a2b3: Fix text before heading in markdown reader
- a1f2475: Add system prompt to ContextChatEngine
v0.3.12
Patch Changes
- 34fb1d8: fix: cloudflare dev
v0.3.11
Patch Changes
- e072c45: fix: remove non-standard API `pipeline`
- 9e133ac: refactor: remove `defaultFS` from parameters

  We don't accept passing fs in the parameters since it's unnecessary for a determined JS environment. This was a polyfill for non-Node.js environments, but now we use another way to polyfill APIs.
- 447105a: Improve Gemini message and context preparation
- 320be3f: Force ChromaDB version to 1.7.3 (to prevent NextJS issues)
- Updated dependencies [e072c45]
- Updated dependencies [9e133ac]
v0.3.10
Patch Changes
- 4aba02e: feat: support gpt4-o
v0.3.9
Patch Changes
- c3747d0: fix: import `@xenova/transformers`

  For now, if you use llamaindex in Next.js, you need to add a plugin from `llamaindex/next` to ensure some module resolutions are correct (see the sketch below).
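A minimal sketch of wiring the plugin into next.config.mjs; withLlamaIndex also appears in the v0.5.20 notes above, but its default-export shape is assumed here:

```ts
// next.config.mjs
import withLlamaIndex from "llamaindex/next";

/** @type {import('next').NextConfig} */
const nextConfig = {};

// Wrap the Next.js config so llamaindex module resolutions work at build time.
export default withLlamaIndex(nextConfig);
```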
v0.3.8
Patch Changes
- ce94780: Add page number to read PDFs and use generated IDs for PDF and markdown content
v0.3.7
Patch Changes
v0.3.6
Patch Changes
v0.3.5
Patch Changes
- bc7a11c: fix: inline ollama build
- 2fe2b81: fix: filter with multiple filters in ChromaDB
- 5596e31: feat: improve `@llamaindex/env`
- e74fe88: fix: change <-> to <=> in the SELECT query
- be5df5b: fix: anthropic agent on multiple chat
- Updated dependencies [5596e31]
v0.3.4
Patch Changes
- 1dce275: fix: export `StorageContext` on edge runtime
- d10533e: feat: add hugging face llm
- 2008efe: feat: add verbose mode to Agent
- 5e61934: fix: remove clone object in `CallbackManager.dispatchEvent`
- 9e74a43: feat: add top k to `asQueryEngine`
- ee719a1: fix: streaming for ReAct Agent
v0.3.3
Patch Changes
- e8c41c5: fix: wrong gemini streaming chat response
Configuration
📅 Schedule: Branch creation - "before 4am every weekday, every weekend" (UTC), Automerge - At any time (no schedule defined).
🚦 Automerge: Enabled.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
- [ ] If you want to rebase/retry this PR, check this box
This PR has been generated by Renovate Bot.
Codecov Report
Attention: Patch coverage is 0% with 42 lines in your changes missing coverage. Please review.
Project coverage is 21.72%. Comparing base (60e2a5a) to head (99efe74). Report is 5 commits behind head on main.
Additional details and impacted files
```
@@           Coverage Diff           @@
##             main    #5154   +/-   ##
=======================================
  Coverage   21.72%   21.72%
=======================================
  Files         566      566
  Lines       19157    19157
  Branches     2752     2755    +3
=======================================
  Hits         4162     4162
  Misses      14943    14943
  Partials       52       52
```
| Flag | Coverage Δ |
|---|---|
| apps.hash-ai-worker-ts | 1.32% <0.00%> (ø) |
Flags with carried forward coverage won't be shown. Click here to find out more.
☂️ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.
Edited/Blocked Notification
Renovate will not automatically rebase this PR, because it does not recognize the last commit author and assumes somebody else may have edited the PR.
You can manually request rebase by checking the rebase/retry box above.
⚠️ Warning: custom changes will be lost.
@indietyp Do we want to address these Semgrep flags here, or were they pre-existing in the codebase already?
These are from a previous iteration that I missed; will fix them!
Benchmark results
@rust/hash-graph-benches – Integrations
representative_read_entity
| Function | Value | Mean | Flame graphs |
|---|---|---|---|
| entity_by_id | entity type ID: https://blockprotocol.org/@alice/types/entity-type/person/v/1 | $$16.3 \mathrm{ms} \pm 176 \mathrm{μs}\left({\color{lightgreen}-29.146 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_id | entity type ID: https://blockprotocol.org/@alice/types/entity-type/block/v/1 | $$16.8 \mathrm{ms} \pm 192 \mathrm{μs}\left({\color{red}8.76 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_id | entity type ID: https://blockprotocol.org/@alice/types/entity-type/book/v/1 | $$16.1 \mathrm{ms} \pm 161 \mathrm{μs}\left({\color{gray}-4.451 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_id | entity type ID: https://blockprotocol.org/@alice/types/entity-type/song/v/1 | $$17.4 \mathrm{ms} \pm 164 \mathrm{μs}\left({\color{gray}3.57 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_id | entity type ID: https://blockprotocol.org/@alice/types/entity-type/page/v/2 | $$16.2 \mathrm{ms} \pm 166 \mathrm{μs}\left({\color{gray}2.12 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_id | entity type ID: https://blockprotocol.org/@alice/types/entity-type/playlist/v/1 | $$15.0 \mathrm{ms} \pm 142 \mathrm{μs}\left({\color{lightgreen}-11.985 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_id | entity type ID: https://blockprotocol.org/@alice/types/entity-type/building/v/1 | $$15.7 \mathrm{ms} \pm 138 \mathrm{μs}\left({\color{lightgreen}-6.834 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_id | entity type ID: https://blockprotocol.org/@alice/types/entity-type/organization/v/1 | $$16.2 \mathrm{ms} \pm 167 \mathrm{μs}\left({\color{lightgreen}-29.548 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_id | entity type ID: https://blockprotocol.org/@alice/types/entity-type/uk-address/v/1 | $$15.6 \mathrm{ms} \pm 170 \mathrm{μs}\left({\color{lightgreen}-7.150 \mathrm{\%}}\right) $$ | Flame Graph |
representative_read_multiple_entities
| Function | Value | Mean | Flame graphs |
|---|---|---|---|
| entity_by_property | depths: DT=255, PT=255, ET=255, E=255 | $$68.4 \mathrm{ms} \pm 313 \mathrm{μs}\left({\color{gray}-0.362 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_property | depths: DT=0, PT=0, ET=0, E=0 | $$40.8 \mathrm{ms} \pm 248 \mathrm{μs}\left({\color{gray}-1.927 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_property | depths: DT=2, PT=2, ET=2, E=2 | $$59.3 \mathrm{ms} \pm 338 \mathrm{μs}\left({\color{gray}0.154 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_property | depths: DT=0, PT=0, ET=0, E=2 | $$45.1 \mathrm{ms} \pm 145 \mathrm{μs}\left({\color{gray}-0.414 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_property | depths: DT=0, PT=0, ET=2, E=2 | $$50.6 \mathrm{ms} \pm 261 \mathrm{μs}\left({\color{gray}-0.907 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_property | depths: DT=0, PT=2, ET=2, E=2 | $$55.1 \mathrm{ms} \pm 234 \mathrm{μs}\left({\color{gray}-1.729 \mathrm{\%}}\right) $$ | Flame Graph |
| link_by_source_by_property | depths: DT=255, PT=255, ET=255, E=255 | $$105 \mathrm{ms} \pm 478 \mathrm{μs}\left({\color{gray}-0.025 \mathrm{\%}}\right) $$ | Flame Graph |
| link_by_source_by_property | depths: DT=0, PT=0, ET=0, E=0 | $$40.7 \mathrm{ms} \pm 203 \mathrm{μs}\left({\color{gray}-0.296 \mathrm{\%}}\right) $$ | Flame Graph |
| link_by_source_by_property | depths: DT=2, PT=2, ET=2, E=2 | $$95.7 \mathrm{ms} \pm 562 \mathrm{μs}\left({\color{gray}-0.346 \mathrm{\%}}\right) $$ | Flame Graph |
| link_by_source_by_property | depths: DT=0, PT=0, ET=0, E=2 | $$80.4 \mathrm{ms} \pm 443 \mathrm{μs}\left({\color{gray}1.25 \mathrm{\%}}\right) $$ | Flame Graph |
| link_by_source_by_property | depths: DT=0, PT=0, ET=2, E=2 | $$87.4 \mathrm{ms} \pm 347 \mathrm{μs}\left({\color{gray}-0.442 \mathrm{\%}}\right) $$ | Flame Graph |
| link_by_source_by_property | depths: DT=0, PT=2, ET=2, E=2 | $$93.2 \mathrm{ms} \pm 451 \mathrm{μs}\left({\color{gray}0.767 \mathrm{\%}}\right) $$ | Flame Graph |
representative_read_entity_type
| Function | Value | Mean | Flame graphs |
|---|---|---|---|
| get_entity_type_by_id | Account ID: d4e16033-c281-4cde-aa35-9085bf2e7579 | $$2.12 \mathrm{ms} \pm 7.51 \mathrm{μs}\left({\color{gray}0.199 \mathrm{\%}}\right) $$ | Flame Graph |
scaling_read_entity_complete_one_depth
| Function | Value | Mean | Flame graphs |
|---|---|---|---|
| entity_by_id | 50 entities | $$5.63 \mathrm{s} \pm 283 \mathrm{ms}\left({\color{red}5.46 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_id | 5 entities | $$27.1 \mathrm{ms} \pm 201 \mathrm{μs}\left({\color{gray}0.994 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_id | 1 entities | $$20.5 \mathrm{ms} \pm 72.1 \mathrm{μs}\left({\color{gray}-0.204 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_id | 10 entities | $$57.3 \mathrm{ms} \pm 245 \mathrm{μs}\left({\color{gray}-1.412 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_id | 25 entities | $$84.4 \mathrm{ms} \pm 277 \mathrm{μs}\left({\color{lightgreen}-51.853 \mathrm{\%}}\right) $$ | Flame Graph |
scaling_read_entity_linkless
| Function | Value | Mean | Flame graphs |
|---|---|---|---|
| entity_by_id | 1 entities | $$1.95 \mathrm{ms} \pm 7.24 \mathrm{μs}\left({\color{gray}0.772 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_id | 100 entities | $$2.15 \mathrm{ms} \pm 5.89 \mathrm{μs}\left({\color{gray}1.60 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_id | 10 entities | $$1.95 \mathrm{ms} \pm 4.38 \mathrm{μs}\left({\color{gray}1.46 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_id | 1000 entities | $$2.93 \mathrm{ms} \pm 13.7 \mathrm{μs}\left({\color{gray}2.19 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_id | 10000 entities | $$13.5 \mathrm{ms} \pm 54.6 \mathrm{μs}\left({\color{red}32.5 \mathrm{\%}}\right) $$ | Flame Graph |
scaling_read_entity_complete_zero_depth
| Function | Value | Mean | Flame graphs |
|---|---|---|---|
| entity_by_id | 50 entities | $$4.19 \mathrm{ms} \pm 38.4 \mathrm{μs}\left({\color{gray}4.00 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_id | 5 entities | $$1.94 \mathrm{ms} \pm 9.12 \mathrm{μs}\left({\color{gray}-0.376 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_id | 1 entities | $$1.95 \mathrm{ms} \pm 10.1 \mathrm{μs}\left({\color{gray}1.30 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_id | 10 entities | $$2.12 \mathrm{ms} \pm 12.6 \mathrm{μs}\left({\color{gray}-0.813 \mathrm{\%}}\right) $$ | Flame Graph |
| entity_by_id | 25 entities | $$3.28 \mathrm{ms} \pm 9.09 \mathrm{μs}\left({\color{gray}-0.381 \mathrm{\%}}\right) $$ | Flame Graph |