
Scaling for a wider public (runtime and GPT calls)

webbertakken opened this issue 2 years ago · 1 comment

Context

The runtime needs to be highly scalable so that it can serve many requests. Potentially thousands of installations will be applied across even more repositories. About half of them might see regular commits (each triggering the pull_request.synchronize hook).

We need to be sure that both the runtime and the LLM backend can handle that load.
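
For illustration, a minimal sketch of the ingest path, assuming a Workers-style runtime with a queue binding. The `REVIEW_QUEUE` binding and handler shape are hypothetical, not the project's actual code; the point is to acknowledge webhook deliveries quickly and push the expensive review work onto a queue so bursts of pull_request.synchronize events don't back up the request path:

```ts
// Hypothetical Worker entrypoint: acknowledge the webhook immediately and
// defer the expensive LLM review work to a queue consumer.

interface Env {
  REVIEW_QUEUE: { send(message: unknown): Promise<void> }; // hypothetical queue binding
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    if (request.method !== 'POST') return new Response('Method not allowed', { status: 405 });

    const event = request.headers.get('x-github-event');
    const payload = (await request.json()) as { action?: string };

    // Only fan out work for the events we care about.
    if (event === 'pull_request' && payload.action === 'synchronize') {
      await env.REVIEW_QUEUE.send(payload); // heavy work happens in a consumer, not here
    }

    // Respond fast; GitHub times out webhook deliveries after 10 seconds.
    return new Response('queued', { status: 202 });
  },
};
```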

Suggested solution

Whichever approach works, to be fair.

Considered alternatives

  • Keep prototyping on a very small scale (which won't help create traction)

webbertakken avatar Jul 08 '23 12:07 webbertakken

The runtime is workers. The backend is GPT, which should be relatively scalable.
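
As a rough sketch of how the GPT side could absorb bursts, assuming the standard chat completions endpoint: retrying with exponential backoff on rate limits keeps throughput steady instead of dropping reviews. The function name, model, and parameters below are illustrative only:

```ts
// Minimal sketch: call the OpenAI chat completions endpoint with exponential
// backoff so bursts of reviews degrade gracefully instead of failing on 429s.
async function completeWithRetry(apiKey: string, prompt: string, maxRetries = 5): Promise<string> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const response = await fetch('https://api.openai.com/v1/chat/completions', {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${apiKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        model: 'gpt-3.5-turbo', // placeholder model
        messages: [{ role: 'user', content: prompt }],
      }),
    });

    if (response.ok) {
      const data = (await response.json()) as { choices: { message: { content: string } }[] };
      return data.choices[0].message.content;
    }

    // Back off on rate limits and transient server errors; give up on anything else.
    if (response.status === 429 || response.status >= 500) {
      await new Promise((resolve) => setTimeout(resolve, 2 ** attempt * 500));
      continue;
    }

    throw new Error(`OpenAI request failed: ${response.status}`);
  }

  throw new Error('OpenAI request failed after retries');
}
```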

We just need to figure out whether donations will cover the costs, or otherwise find more sponsors.

webbertakken avatar Jul 15 '23 08:07 webbertakken