Screeni-py
Build an API system for managing the AI functionalities
Hey @pranjal-joshi, I have been testing your application and noticed the package size has grown quite large due to the addition of TensorFlow to the binary. Instead of bundling it, I would recommend building an API system that serves the AI-based requests, enabling seamless performance and speed over the long run.
If you like my proposal, kindly let me know so I can work with you on building the setup.
Hi @rexdivakar
Thanks for your feedback. I've also observed a massive filesize and a long start-up time by inclusion of TensorFlow binaries into the executables. However, as this is an open-source project, I am looking forward to deploying the model on a "Permanent" Free-tier endpoint as I'm not monetizing this product.
Looking forward to getting your suggestions/code collaboration if we can move this model to a cost-free inference endpoint.
Hey @pranjal-joshi, I have some suggestions below; let me know which one suits your ideas best:
- Build an API endpoint that interacts with our model file and returns the response upon request. (I have a server you can use for free; we can build an SFTP pipeline and dump the model to it so the latest trained version is used automatically.)
- Host the model on GitHub and download it on demand from the client side. (Note: this approach is slow and platform-dependent, which affects runtime performance.)
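To make the first option concrete, here is a minimal stdlib-only sketch of what such a prediction endpoint could look like. Everything in it (the `/predict` route, the `DummyModel` stand-in, the port) is hypothetical and not part of Screeni-py; in practice the real trained model would be unpickled once at startup instead of the placeholder class.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class DummyModel:
    """Hypothetical stand-in for the trained model (really loaded via pickle)."""

    def predict(self, rows):
        # Placeholder logic so the sketch is runnable without the real model.
        return [sum(r) for r in rows]


MODEL = DummyModel()  # in practice: pickle.load(open("model.pkl", "rb"))


class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/predict":
            self.send_error(404)
            return
        # Read the JSON request body and run it through the model.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        result = MODEL.predict([payload["features"]])[0]
        body = json.dumps({"prediction": result}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    # Hypothetical port; the client would POST {"features": [...]} here.
    HTTPServer(("0.0.0.0", 8000), PredictHandler).serve_forever()
```

The point of the sketch is that the client never ships TensorFlow at all; it only needs an HTTP call, and the model can be swapped on the server side without touching the released binaries.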
This Issue is marked as Stale due to Inactivity. This Issue will be Closed soon.
Hey keep this open
Hi @rexdivakar
Thanks for your suggestions.
To keep this project maintainable and truly open-source, can you elaborate on the following point?
- Host the model on GitHub and download it on demand from the client side. (Note: this approach is slow and platform-dependent, which affects runtime performance.)
So we could just train the model, save the .pkl file to GitHub, pull the pickled model down to the client side, and set up a local API endpoint there instead of installing TensorFlow directly on the client's machine.
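A rough sketch of that download-and-cache flow, assuming the .pkl is published as a downloadable file. The URL and cache path below are placeholders, not real Screeni-py release assets, and a real implementation should only unpickle files from a trusted source.

```python
import os
import pickle
import urllib.request

# Placeholder URL and cache location; the real asset name/owner would differ.
MODEL_URL = "https://github.com/<owner>/<repo>/releases/latest/download/model.pkl"
CACHE_PATH = os.path.join(os.path.expanduser("~"), ".screenipy_model.pkl")


def get_model(url=MODEL_URL, cache=CACHE_PATH):
    """Download the pickled model on first use, then serve it from the local cache."""
    if not os.path.exists(cache):
        # One-time download; subsequent runs skip straight to loading.
        urllib.request.urlretrieve(url, cache)
    with open(cache, "rb") as f:
        return pickle.load(f)
```

After the first run the startup cost is just a local pickle load, which avoids bundling TensorFlow into the executable while still letting releases ship updated models.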
I believe the same method is implemented in v1.38.
If you download binary files from the release, you don't need to set up standalone TensorFlow development on your system.
The binary will unpack the entire environment in real-time (which significantly increases app startup time!)
@rexdivakar
This Issue is marked as Stale due to Inactivity. This Issue will be Closed soon.