Refactor to support Native and Remote variants
Motivation
It should be possible to switch the machine learning backend while keeping the Unreal frontend API the same, allowing for a very flexible dev environment. By specifying an abstract UMachineLearningBaseComponent (https://github.com/getnamo/machine-learning-remote-ue4/blob/master/Source/MachineLearningBase/Public/MachineLearningBaseComponent.h), we can subclass this component into remote, Unreal Python, and native variants. Depending on which backend is needed, these should be swappable without any dev code changes.
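As a rough illustration of that idea (this is a minimal sketch, not the actual header linked above; the SendInput method and the remote subclass shown here are assumptions for the example), the base component would declare the backend-agnostic operations and each variant would override them:

```cpp
// Hypothetical sketch of the shared abstract component; the real API lives in
// MachineLearningBase/Public/MachineLearningBaseComponent.h.
#pragma once

#include "CoreMinimal.h"
#include "Components/ActorComponent.h"
#include "MachineLearningBaseComponent.generated.h"

UCLASS(ClassGroup = (MachineLearning), meta = (BlueprintSpawnableComponent))
class UMachineLearningBaseComponent : public UActorComponent
{
	GENERATED_BODY()

public:
	// Backend-agnostic entry point; each variant (remote, unreal python, native)
	// overrides this with its own transport or execution path.
	UFUNCTION(BlueprintCallable, Category = MachineLearning)
	virtual void SendInput(const FString& InputData, const FString& FunctionName) {}
};

// Hypothetical remote variant: forwards the call to the python host server process.
UCLASS(ClassGroup = (MachineLearning), meta = (BlueprintSpawnableComponent))
class UMachineLearningRemoteComponent : public UMachineLearningBaseComponent
{
	GENERATED_BODY()

public:
	virtual void SendInput(const FString& InputData, const FString& FunctionName) override
	{
		// e.g. serialize InputData and emit it to the ml-remote-server process.
	}
};
```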
On the server side we can also specify a base MLPluginAPI that isn't TensorFlow specific. This would open up PyTorch backends without any Unreal frontend code change.
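To make "no frontend code change" concrete, here is a hedged example of an Unreal-side call site (AMyPawn::RunInference is hypothetical, and SendInput is the assumed method from the sketch above): because dev code only references the base component type, whether the server ultimately routes the request to TensorFlow or PyTorch is invisible to the caller.

```cpp
// Hypothetical call site: only the base type is referenced, so the concrete
// backend (remote TensorFlow, remote PyTorch, native, unreal python) can be
// swapped at spawn time or in the editor without touching this code.
void AMyPawn::RunInference(UMachineLearningBaseComponent* MLComponent)
{
	if (MLComponent)
	{
		// The JSON payload and target function name are illustrative only.
		MLComponent->SendInput(TEXT("{\"pixels\": [0, 1, 0]}"), TEXT("onJsonInput"));
	}
}
```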
Having a remote server component would also enable Linux/Mac builds without restricting Python/TensorFlow versions to those compatible with UnrealEnginePython. It would also enable remote ML on e.g. phone devices (native may support TFLite at some point).
Remote work
- Plugin - https://github.com/getnamo/machine-learning-remote-ue4 - first build issue: https://github.com/getnamo/machine-learning-remote-ue4/issues/1
- Python host server - https://github.com/getnamo/ml-remote-server - first build issue: https://github.com/getnamo/ml-remote-server/issues/1
Native work
Intended to be inference-focused initially
- https://github.com/getnamo/tensorflow-native-ue4 - first build issue: https://github.com/getnamo/tensorflow-native-ue4/issues/1
Tensorflow-ue4 Refactor
- Use UMachineLearningBaseComponent as this plugin's base class, enabling C++ use and compatibility with the other variants (see the sketch below).
Remote variant is working; the native variant is pending.
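A sketch of what that re-parenting implies for this plugin (UTensorFlowComponent is the plugin's component, but the override shown and the exact base-class method are assumptions, not the shipped code):

```cpp
// Hypothetical: the tensorflow-ue4 component derived from the shared base class
// so it stays interchangeable with the remote and native variants.
#pragma once

#include "CoreMinimal.h"
#include "MachineLearningBaseComponent.h"
#include "TensorFlowComponent.generated.h"

UCLASS(ClassGroup = (MachineLearning), meta = (BlueprintSpawnableComponent))
class UTensorFlowComponent : public UMachineLearningBaseComponent
{
	GENERATED_BODY()

public:
	virtual void SendInput(const FString& InputData, const FString& FunctionName) override
	{
		// Route the input to the embedded python/TensorFlow script
		// (backend detail specific to tensorflow-ue4).
	}
};
```

The plugin module would also declare a dependency on the shared MachineLearningBase module in its Build.cs so the base header resolves.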
Early refactor release with auto-server launch support: https://github.com/getnamo/tensorflow-ue4/releases/tag/1.0.0alpha2