Burr Serving Framework
Is your feature request related to a problem? Please describe. We have some examples of serving in FastAPI. They all follow the same pattern. We should be able to automate this (to some extent):
Describe the solution you'd like A few thoughts:
In FastAPI:
```python
burr_app = Application()....build()
app = FastAPI()
burr_fastapi.post("/api/v0/...", app, burr_app, halt_after=..., halt_before=..., ..., response_model=CustomResponseModel)
burr_fastapi.post_streaming("/api/v0/...", app, burr_app, halt_after=..., halt_before=..., ..., response_model=CustomResponseModel)
```
Then `CustomResponseModel` has to be instantiable from the state dict (e.g. its fields are a subset of the state's keys, or have the right defaults).
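A minimal sketch of that "instantiable from state" requirement, using a plain dataclass in place of a Pydantic model -- `CustomResponseModel`, the state keys, and `response_from_state` are all made-up names for illustration:

```python
from dataclasses import dataclass, fields

# Hypothetical: the response model declares a subset of the state's keys;
# extra state keys are dropped, missing keys fall back to model defaults.

@dataclass
class CustomResponseModel:
    answer: str
    confidence: float = 0.0  # default used when absent from state

def response_from_state(model_cls, state: dict):
    # Keep only the keys the model actually declares.
    allowed = {f.name for f in fields(model_cls)}
    return model_cls(**{k: v for k, v in state.items() if k in allowed})

state = {"answer": "42", "chat_history": ["hi"], "step_count": 3}
resp = response_from_state(CustomResponseModel, state)
```

With a real Pydantic model the same filtering could happen in a validator, but the contract is identical: the endpoint helper never needs a hand-written mapping from state to response.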
We could also decorate functions -- these would inject the results in:
```python
burr_app = Application()....build()
app = FastAPI()

@burr_fastapi.post("/api/v0/...", app, burr_app, halt_after=..., halt_before=..., ...)
def my_burr_endpoint(burr_state, burr_action, burr_result) -> MyResponse:
    return MyResponse(...)
```
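A rough sketch of how that decorator could work. The keyword names (`burr_state`/`burr_action`/`burr_result`), the `run()` signature, and `FakeBurrApp` are assumptions for illustration, not Burr's real API:

```python
import functools

def burr_post(route, app, burr_app, halt_after=()):
    def decorator(fn):
        @functools.wraps(fn)
        def endpoint(**request_inputs):
            # Run the Burr application until a halt point, then hand the
            # action/result/state to the user-defined handler.
            action, result, state = burr_app.run(
                halt_after=halt_after, inputs=request_inputs
            )
            return fn(burr_state=state, burr_action=action, burr_result=result)
        # A real version would register the wrapper with FastAPI, roughly:
        # app.post(route, response_model=...)(endpoint)
        return endpoint
    return decorator

class FakeBurrApp:
    # Stand-in for a built Burr application, for the sketch only.
    def run(self, halt_after=(), inputs=None):
        return "final_action", {"ok": True}, {"answer": "hi"}

@burr_post("/api/v0/chat", app=None, burr_app=FakeBurrApp(), halt_after=("final_action",))
def my_burr_endpoint(burr_state, burr_action, burr_result):
    return {"action": burr_action, "answer": burr_state["answer"]}
```

The user function stays a plain callable, so it composes with FastAPI's own dependency injection for anything request-scoped.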
Note that we could also derive the response model automatically if we set up which fields we expose in state and add #139 -- this is something we'd want to do later, but:
```python
burr_app = ...
app = FastAPI()
burr_fastapi.post("/api/v0/...", app, burr_app, halt_after=..., halt_before=..., ...)
```
This could automatically derive the response model from statically inferred types plus `halt_before`/`halt_after`, although it's a little complicated...
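A sketch of the derivation step, assuming we already statically inferred the types of the exposed state fields reachable before the halt point -- the field names and types below are made up, and `make_dataclass` stands in for Pydantic's dynamic model creation:

```python
from dataclasses import make_dataclass

# Hypothetical: field names/types produced by static inference over the
# actions that can run before halt_after.
inferred_fields = [("answer", str), ("score", float)]

# Build the response model dynamically from those fields.
DerivedResponseModel = make_dataclass("DerivedResponseModel", inferred_fields)

resp = DerivedResponseModel(answer="42", score=0.9)
```

With Pydantic, `pydantic.create_model` would play the same role and also give FastAPI an OpenAPI schema for free.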
Describe alternatives you've considered We can keep writing custom endpoints, as we do now.
Additional context As asked on Discord -- this is part 1.
My impression here is that a lot of developers are familiar with FastAPI (more so than with Burr), and there are many learning resources available for it.
Creating a new "Burr API for FastAPI" could add cognitive load and limit people's ability to reuse their existing FastAPI snippets and templates. Something like a "production-ready template" may be more valuable? It could include patterns for persistence, tracking, auth, etc.
API
Data Model
A single return type for pretty much everything:
```python
class TypedStateWithNextMethod:
    state: GeneratedStatePydanticModel
    next_method: enum
```
API
API endpoints that are generated:
```
POST getOrCreate(app_id, partition_key) -> TypedStateWithNextMethod
GET exists(app_id, partition_key) -> TypedStateWithNextMethod
GET list(app_id=None, partition_key=None) -> TypedStateWithNextMethod

# For each node in the graph
POST node_name(app_id, partition_key, GeneratedNodeInputModelForNode) -> TypedStateWithNextMethod

# Later
POST fork(app_id, new_app_id, partition_key, state_overrides: ???) -> TypedStateWithNextMethod
```
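The per-node generation could be sketched like this. The node names and the route prefix are assumptions; a real version would introspect the Burr graph and register the callables on a FastAPI app with the generated input model as the request body:

```python
# Hypothetical: one POST endpoint per graph node.

def make_node_endpoint(node_name):
    def endpoint(app_id, partition_key, body):
        # Would load/create the application for (app_id, partition_key),
        # run the named node with `body` as inputs, and return a
        # TypedStateWithNextMethod built from the resulting state.
        return {"node": node_name, "app_id": app_id, "next_method": "..."}
    return endpoint

routes = {
    f"/api/v0/{name}": make_node_endpoint(name)
    for name in ["prompt", "generate", "respond"]  # assumed graph nodes
}
```

Closing over `node_name` in a factory (rather than in a bare loop) is what keeps each generated endpoint bound to its own node.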
To think about
- [ ] Authentication
- [ ] ACLs
- [ ] Customizing API flavor (i.e., OpenAI compatibility?)