
Build an API system for managing the AI functionalities

Open rexdivakar opened this issue 2 years ago • 7 comments

Hey @pranjal-joshi, I have been testing your application and noticed the package size has become pretty huge due to the addition of TensorFlow to the binary. Instead, I would recommend building an API system that serves the AI-based requests, enabling seamless performance and speed over the longer run.

If you like my proposal, kindly let me know so I can work with you on building the setup.

rexdivakar avatar Sep 26 '22 03:09 rexdivakar

Hi @rexdivakar

Thanks for your feedback. I've also observed the massive file size and long start-up time caused by the inclusion of the TensorFlow binaries in the executables. However, as this is an open-source project and I'm not monetizing this product, I am looking forward to deploying the model on a "permanent" free-tier endpoint.

Looking forward to getting your suggestions/code collaboration if we can move this model to a cost-free inference endpoint.

pranjal-joshi avatar Sep 26 '22 04:09 pranjal-joshi

Hey @pranjal-joshi, I have some suggestions below; let me know which one would suit your ideas best:

  1. Build an API endpoint that interacts with our model file and returns the response upon request. (I have a server which you can use for free; we can build an SFTP pipeline and dump the model there so it automatically serves the latest trained version.)
  2. Host the model on GitHub and download it on demand from the client side. (Note: this approach is slow and platform dependent, affecting runtime performance.)
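A minimal sketch of option 1, assuming a stdlib-only HTTP server and a stub in place of the real trained model. The `/predict` route, the request payload shape, and the model loader shown here are all hypothetical illustrations, not Screeni-py code:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def load_model():
    # Placeholder for loading the trained model delivered via the SFTP
    # pipeline (e.g. a Keras or pickled model); a stub is used here.
    return lambda features: {"signal": "BUY" if sum(features) > 0 else "SELL"}

MODEL = load_model()

def predict(payload: dict) -> dict:
    """Run inference on one request body of the form {"features": [...]}."""
    return MODEL(payload["features"])

class InferenceHandler(BaseHTTPRequestHandler):
    # Clients POST JSON to /predict and get the model's answer back,
    # so they never need TensorFlow installed locally.
    def do_POST(self):
        if self.path != "/predict":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps(predict(payload)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), InferenceHandler).serve_forever()
```

With this shape, the client binary only needs an HTTP library, and the heavy TensorFlow dependency lives entirely on the server.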

rexdivakar avatar Sep 26 '22 10:09 rexdivakar

This Issue is marked as Stale due to Inactivity. This Issue will be Closed soon.

github-actions[bot] avatar Oct 11 '22 11:10 github-actions[bot]

Hey, keep this open.

rexdivakar avatar Oct 11 '22 12:10 rexdivakar

Hi @rexdivakar

Thanks for your suggestions.

To keep this project maintainable and truly open-source, can you elaborate on the following point?

  1. Host the model on GitHub and download it on demand from the client side. (Note: this approach is slow and platform dependent, affecting runtime performance.)

pranjal-joshi avatar Oct 11 '22 12:10 pranjal-joshi

So we can just train the model, save the .pkl file to GitHub, and download that .pkl model on the client side, then set up a local API endpoint for it instead of installing TensorFlow directly on the client's machine.
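The client-side flow described above could look something like this sketch: fetch the trained `.pkl` from a GitHub raw/release URL once, cache it on disk, and load it locally. The URL, cache path, and pickle format are assumptions for illustration only:

```python
import os
import pickle
import urllib.request

# Hypothetical location of the published model; not a real Screeni-py URL.
MODEL_URL = "https://raw.githubusercontent.com/<user>/<repo>/main/model.pkl"
CACHE_PATH = os.path.expanduser("~/.screenipy/model.pkl")

def get_model(url: str = MODEL_URL, cache: str = CACHE_PATH):
    """Return the cached model, downloading it only on first use."""
    if not os.path.exists(cache):
        os.makedirs(os.path.dirname(cache), exist_ok=True)
        urllib.request.urlretrieve(url, cache)  # dump the model on demand
    with open(cache, "rb") as fh:
        return pickle.load(fh)
```

Caching after the first download would mitigate the slowness noted earlier, though clients would still pay the download cost whenever a new model version is published.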

rexdivakar avatar Oct 11 '22 13:10 rexdivakar

I believe the same method is implemented in v1.38. If you download the binary files from the release, you don't need to set up a standalone TensorFlow installation on your system. The binary will unpack the entire environment at runtime (which significantly increases app start-up time!) @rexdivakar

pranjal-joshi avatar Oct 12 '22 12:10 pranjal-joshi

This Issue is marked as Stale due to Inactivity. This Issue will be Closed soon.

github-actions[bot] avatar Oct 28 '22 11:10 github-actions[bot]