Add Modal (serverless GPU provider) example with advanced features (config.toml)
Update app.py to show how to set up config.toml with Modal.
I already have it working. Let me know if you are interested in a PR.
Please reply with a 👍 if you want this feature.
For anyone interested in contributing, I suggest checking out Modal's documentation on persisting volumes: https://modal.com/docs/guide/volumes#persisting-volumes. Alternatively, you can copy the config.toml into the image during the Modal image build.
The app.py file itself serves as a minimal example to run Tabby on Modal, so it may not be ideal to update it directly. But an example under experimental/ is certainly good.
Hey, can I have this?
I haven't managed to get persisting volumes working (it errors with something like "cannot write on a lazy filesystem"), but if you index the repos at the image build stage, you can do it like this:
```python
# Afterwards, run: tabby scheduler --now
def render_config():
    return """# Index repositories' source code as additional context for code completion.
[[repositories]]
name = "tabby"
git_url = "https://github.com/TabbyML/tabby"
"""


def scan_repos():
    import subprocess

    # Index the configured repositories at image build time.
    subprocess.run(
        ["/opt/tabby/bin/tabby", "scheduler", "--now"],
        check=True,  # fail the image build if indexing fails
    )


def write_config():
    with open("/data/config.toml", "w") as f:
        f.write(render_config())


# IMAGE_NAME and download_model come from the existing app.py example.
image = (
    Image.from_registry(
        IMAGE_NAME,
        add_python="3.11",
    )
    .dockerfile_commands("ENTRYPOINT []")
    .run_function(download_model)
    .run_function(write_config)  # write config.toml before indexing
    .run_function(scan_repos)
    .pip_install("asgi-proxy-lib")
)
```
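The `render_config` above hard-codes a single repository even though its comment mentions several; if you want to index more than one, the helper could be generalized to take a list. A sketch (pure Python; the second repo here is just illustrative):

```python
def render_config(repos):
    """Render a Tabby config.toml with one [[repositories]] block per repo."""
    blocks = [
        f'[[repositories]]\nname = "{name}"\ngit_url = "{git_url}"\n'
        for name, git_url in repos
    ]
    header = "# Repositories indexed as additional context for code completion.\n"
    return header + "\n".join(blocks)


config = render_config([
    ("tabby", "https://github.com/TabbyML/tabby"),
    ("modal-examples", "https://github.com/modal-labs/modal-examples"),
])
```

`write_config` can then write this string to /data/config.toml exactly as before.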