
Add similarity search batch processing and tests

Open unicoder88 opened this issue 1 year ago • 15 comments

Hi! This pull request adds the ability to pass multiple texts in a single request while remaining compatible with the single-text format.

Solves issues https://github.com/1712n/dn-institute/issues/431 and https://github.com/1712n/dn-institute/issues/430

Considerations taken:

  • max input items for Workers AI - https://developers.cloudflare.com/workers-ai/models/bge-base-en-v1.5/#api-schema

    Max input tokens: 512

    The API schema suggests "maxItems": 100. I'm uncertain which limit is authoritative, but it's configurable via the MAX_INPUT environment variable, with 100 as the default.

  • The query-vectors endpoint doesn't seem to provide batch support, so I opted to run all requests in parallel with Promise.all, meaning all requests must succeed; see the sketch below. An alternative is Promise.allSettled, which can tolerate individual request failures, but then we'd have to come up with a per-item error format, which isn't worth it for now.
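
A simplified sketch of the idea (binding names, model handling, and the handler shape are illustrative, not necessarily exactly what's in this PR):

```ts
// Illustrative sketch only: binding names and types follow
// @cloudflare/workers-types conventions, not necessarily this PR.
interface Env {
  AI: Ai;
  VECTORIZE: VectorizeIndex;
  MAX_INPUT?: string;
}

async function searchBatch(env: Env, texts: string[]) {
  const maxInput = Number(env.MAX_INPUT ?? 100);
  if (texts.length > maxInput) {
    throw new Error(`Batch exceeds MAX_INPUT (${maxInput})`);
  }

  // One embedding call: the model accepts an array of texts.
  const { data } = await env.AI.run("@cf/baai/bge-base-en-v1.5", { text: texts });

  // No batch endpoint for queries, so fan out; Promise.all fails fast
  // if any single query rejects.
  return Promise.all(
    data.map((vector) => env.VECTORIZE.query(vector, { topK: 5 }))
  );
}
```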


In addition, basic high level integration tests were added.

unicoder88 avatar May 19 '24 15:05 unicoder88

@unicoder88, what's the impact of batching on performance/costs in duplicate-heavy use cases?

evgenydmitriev avatar May 20 '24 09:05 evgenydmitriev

@evgenydmitriev , I've added unique-text processing and updated the tests. Only unique texts are sent to Workers AI, and results are mapped back to the original inputs (sketch below). This way, duplicate-heavy use cases still consume inbound traffic but don't increase Cloudflare usage costs.
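
Roughly, the dedup step looks like this (names are illustrative):

```ts
// Embed each distinct text once, then map results back to every
// original position. Illustrative names only.
function dedupe(texts: string[]) {
  const unique = [...new Set(texts)];
  const indexOf = new Map(unique.map((t, i) => [t, i] as const));
  return {
    unique,                                      // what gets sent to Workers AI
    backRefs: texts.map((t) => indexOf.get(t)!), // original -> unique index
  };
}

// Usage:
//   const { unique, backRefs } = dedupe(texts);
//   const embeddings = await embed(unique);        // hypothetical helper
//   const perInput = backRefs.map((i) => embeddings[i]);
```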

unicoder88 avatar May 20 '24 18:05 unicoder88

Users creating batches of identical messages is a good consideration, but my question wasn't about duplicates within a batch. A much bigger duplicate problem might show up across multiple requests/users. Where do you think the biggest costs are? How does your batching method affect those costs (if at all)?

evgenydmitriev avatar May 21 '24 11:05 evgenydmitriev

Thanks @evgenydmitriev. It looks like the most expensive part is the Vectorize calls. I intend to add the Cloudflare Cache API on top of each individual vector.

Pricing for 1,000 requests:

Workers AI

https://developers.cloudflare.com/workers-ai/platform/pricing/

$0.011 / 1,000 Regular Twitch Neurons (also known as Neurons).

https://ai.cloudflare.com/#pricing-calculator

Price: $0.011 / 1,000 Neurons. 768 tokens × 1,000 requests = 970 Neurons = $0.0011 when over the free limit.

Vectorize

https://developers.cloudflare.com/vectorize/platform/pricing/ (price: $0.040 per million queried vector dimensions)

Queried vector dimensions: the total number of vector dimensions queried. If you have 50,000 vectors with 768 dimensions in an index and make 1,000 queries against that index, your total queried vector dimensions sum to ((50,000 + 1,000) × 768) = 39.168 million = $1,567 (UPD: the price meant here is one and a half dollars per 1,000 requests, so $1.567).
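
For reference, re-deriving the corrected figure:

```ts
// Re-derivation of the Vectorize estimate above, using the rate
// quoted from the pricing page ($0.040 per million queried dimensions).
const storedVectors = 50_000;
const queries = 1_000;
const dimensions = 768;

const queriedDims = (storedVectors + queries) * dimensions; // 39,168,000
const cost = (queriedDims / 1_000_000) * 0.04;              // ≈ $1.567 per 1,000 queries
```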

unicoder88 avatar May 22 '24 07:05 unicoder88


@unicoder88 your pricing math is off, and I personally think that manually implementing caching is almost always overkill, but you are thinking in the right direction. There might be an easier solution, though.

evgenydmitriev avatar May 22 '24 07:05 evgenydmitriev

Actually, you're right @evgenydmitriev! Obviously, a database is a better and more straightforward solution than a distributed cache here, so I'm now planning to use D1: create a schema, then query and insert results there.

D1 pricing looks tasty:

Rows written: 100,000 / day on the Free plan; on Paid, first 50 million / month included, then $1.00 / million rows

In the worst case (all new vectors, free writes exhausted), 1,000 writes will cost $0.001.
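
A sketch of what I have in mind, assuming a hypothetical embeddings table and a DB binding (schema and names are illustrative):

```ts
// Hypothetical D1-backed lookup: return a stored embedding if the
// text was seen before, otherwise embed it and store the result.
interface Env {
  DB: D1Database;
  AI: Ai;
}

async function getOrEmbed(env: Env, text: string): Promise<number[]> {
  const row = await env.DB
    .prepare("SELECT vector FROM embeddings WHERE text = ?")
    .bind(text)
    .first<{ vector: string }>();
  if (row) return JSON.parse(row.vector);

  const { data } = await env.AI.run("@cf/baai/bge-base-en-v1.5", { text: [text] });
  await env.DB
    .prepare("INSERT INTO embeddings (text, vector) VALUES (?, ?)")
    .bind(text, JSON.stringify(data[0]))
    .run();
  return data[0];
}
```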

unicoder88 avatar May 22 '24 12:05 unicoder88

I'm not sure D1 is necessarily cheaper than the Cloudflare cache, but it might be, depending on usage. My argument was about avoiding manual caching, whatever data store you might choose. There's a much simpler solution that is already implemented for the worker, but it will be broken by your batching approach.

evgenydmitriev avatar May 22 '24 13:05 evgenydmitriev

Hi @evgenydmitriev ! I have some updates to bring up to you.

Firstly, worker limits:

  • max 6 simultaneous outgoing fetch requests
  • 128 MB memory
  • Duration: no limit
  • CPU time: 30 s per HTTP request

A possible bottleneck is the cap of 6 simultaneous requests: this batching method attempts to execute up to 100 requests, so the excess should be throttled (see the sketch below).
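
One way to throttle, processing the batch in chunks so no more than 6 subrequests are in flight at once (chunk size and names are illustrative):

```ts
// Simple chunked throttle: at most `limit` requests in flight at a time.
async function mapWithLimit<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>
): Promise<R[]> {
  const results: R[] = [];
  for (let i = 0; i < items.length; i += limit) {
    const chunk = items.slice(i, i + limit);
    results.push(...(await Promise.all(chunk.map(fn))));
  }
  return results;
}

// e.g. await mapWithLimit(vectors, 6, (v) => env.VECTORIZE.query(v, { topK: 5 }));
```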

However, I built the whole system with the same batching approach, and the results look better than expected.

The setup:

  • took a dataset of ~100k texts: https://www.kaggle.com/datasets/ashishkumarak/netflix-reviews-playstore-daily-updated
  • inserted 10k vectors into a Vectorize index using the same AI model
  • ran searches in different combinations

Processing times, however, show no considerable slowdown regardless of concurrency or duplicates:

  • Batch search 36 random texts - AI 900 ms, Vectorize 2 s
  • 72 texts - AI 2 s, Vectorize 3 s
  • 100 texts - AI 1-4 s, Vectorize 4-5 s

Parallel execution:

  • 2 parallel executions x search 100 random texts - AI 2 s, Vectorize 5 s
  • 10 parallel executions x search 100 random texts - AI 0.6-3 s, Vectorize 6-7 s
  • 10 parallel executions x search 100 duplicate texts - AI 0.5-4 s, Vectorize 6-7 s

CPU time also looks good.

For simplicity, I would remove the duplicate checking, and drop caching as well, since 768-dimensional vectors are unlikely to match exactly between different texts. Also, I would tighten the input checks to require non-empty text, because a single empty text in a batch breaks the whole batch; a sketch of such a check is below.
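
```ts
// Sketch of the input check (error messages and shape are illustrative):
// reject empty or non-string items up front, so one bad entry can't
// break the whole batch downstream.
function validateBatch(texts: unknown): string[] {
  if (!Array.isArray(texts) || texts.length === 0) {
    throw new Error("Expected a non-empty array of texts");
  }
  return texts.map((t, i) => {
    if (typeof t !== "string" || t.trim() === "") {
      throw new Error(`Item ${i} is empty or not a string`);
    }
    return t;
  });
}
```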

I would appreciate any further hints.

unicoder88 avatar May 24 '24 20:05 unicoder88

Thanks for the detailed test results. You correctly identified the Workers AI embedding models as the cost bottleneck in one of your previous comments, even accounting for the pricing calculation error. You also correctly identified the need for a caching mechanism there. The question is finding an easier and cheaper way of doing it than manually attaching a data store and querying every incoming message against it. There's an easier solution that's already implemented in the single-message approach, but it will be broken if we implement your batching mechanism as is.

evgenydmitriev avatar May 24 '24 21:05 evgenydmitriev

@evgenydmitriev, it would be nice to have a URL like GET https://www.example.com/some-namespace?text=some%20text and then let the Cloudflare CDN cache those. I've tried this, and it doesn't work because workers run before the cache.

What I tried:


  • attaching the worker to a Custom Domain
  • adding a bunch of Cache-Control headers to the response
    c.res.headers.append("Cache-Control", "public, max-age=14400, s-maxage=84000, stale-if-error=60")
    c.res.headers.append("Cloudflare-CDN-Cache-Control", "max-age=24400")
    c.res.headers.append("CDN-Cache-Control", "max-age=18000")
    
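For comparison, caching can also be done explicitly from inside the worker with the Cache API rather than relying on CDN defaults; a sketch, where handleSearch is a hypothetical single-message handler:

```ts
// Hypothetical single-message handler, declared only for this sketch.
declare function handleSearch(request: Request, env: unknown): Promise<Response>;

// Explicit Workers Cache API usage; caches.default is available in
// workers, keyed here by the incoming GET request URL.
async function cachedSearch(request: Request, env: unknown, ctx: ExecutionContext) {
  const cache = caches.default;
  const hit = await cache.match(request);
  if (hit) return hit;

  let response = await handleSearch(request, env);
  response = new Response(response.body, response); // make headers mutable
  response.headers.set("Cache-Control", "public, max-age=14400");
  ctx.waitUntil(cache.put(request, response.clone()));
  return response;
}
```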

unicoder88 avatar May 27 '24 08:05 unicoder88

@unicoder88, could you clarify what it is that you are trying to cache? HTTP requests containing message batches?

evgenydmitriev avatar May 27 '24 09:05 evgenydmitriev

@evgenydmitriev, well, the latest suggestion is to start small and create an endpoint that works on a single message but is cacheable by URL. Then either have clients initiate multiple parallel requests on their own if necessary (Cloudflare will take care of scaling), or build some sort of gateway on top of it that takes a batch and dispatches to this worker (sketch below). The latter is still unclear.
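
A sketch of that hypothetical gateway (the worker URL and the JSON response shape are assumptions):

```ts
// Fan a batch out as one cacheable GET per message against the
// single-text worker, then collect the JSON results in order.
async function gatewayFetch(texts: string[], workerUrl: string) {
  const responses = await Promise.all(
    texts.map((text) => fetch(`${workerUrl}?text=${encodeURIComponent(text)}`))
  );
  return Promise.all(responses.map((r) => r.json()));
}
```

Note that the 6-subrequest concurrency cap mentioned earlier would apply to such a gateway as well.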

unicoder88 avatar May 27 '24 09:05 unicoder88

well, the latest suggestion is to start small and create an endpoint that works on a single message but is cacheable by URL.

Isn't this what the current worker does? You do not need a custom domain to cache HTTP requests to Cloudflare Workers.

evgenydmitriev avatar May 27 '24 10:05 evgenydmitriev

Right now it's a POST body, so it shouldn't be cacheable by default. In this prototype, I converted it to GET (with no luck, admittedly 😀).

unicoder88 avatar May 27 '24 10:05 unicoder88

Hi @evgenydmitriev , please review updates.

unicoder88 avatar May 29 '24 07:05 unicoder88