[FEATURE] Switch from Deeplearning4j to Brain4j
### Describe your feature request.
I would like to propose adding optional support for Brain4J as a lightweight and high-performance alternative for potential machine learning features within LiquidBounce — such as behavioral pattern detection, predictive automation, or other AI-based modules.
Brain4J is a minimal, flexible, and fast deep learning library for Java, designed to provide a simple way to train and run ML models without the heavy dependencies of traditional frameworks.
### Why Brain4J?
- 🧠 Lightweight: The entire library is under 10 MB, making it far smaller than TensorFlow, PyTorch, or Deeplearning4j.
- ⚡ Performance-focused: Brain4J includes low-level optimizations, such as SIMD support and inference optimization, offering solid runtime performance even on CPU.
- 🧩 Easy integration: It’s pure Java and doesn’t require complex native bindings or external dependencies — ideal for LiquidBounce’s modular architecture.
- 🧮 GPU acceleration (in progress): The development team is currently working on GPU support, which could bring additional performance gains in the future.
- 🔧 Actively developed: The wiki and roadmap show a well-structured architecture, installation guides, and detailed usage examples, with an open call for community contributions.
### Potential Benefits for LiquidBounce
- Reduce client size by avoiding large ML frameworks.
- Enable ML-powered features (e.g., pattern recognition, automated behaviors) with minimal overhead.
- Improve portability for lightweight or embedded environments.
- Allow experimentation with AI modules in a modular, opt-in fashion.
- Strengthen collaboration between open-source communities (LiquidBounce × Brain4J).
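To make the opt-in idea concrete, here is a minimal sketch of how an ML-backed feature could sit behind a small interface with a non-ML fallback, so the heavy dependency stays optional. All class and method names here are hypothetical illustrations — none of this is actual LiquidBounce or Brain4J API:

```java
/** Hypothetical abstraction an ML-backed module would implement. */
interface RotationPredictor {
    /** Predicts the next (yaw, pitch) pair from a flat rotation history. */
    float[] predict(float[] history);
}

/** Non-ML fallback: simply repeats the most recent yaw/pitch pair. */
final class LastValuePredictor implements RotationPredictor {
    @Override
    public float[] predict(float[] history) {
        int n = history.length;
        return new float[] { history[n - 2], history[n - 1] };
    }
}

final class PredictorRegistry {
    // A Brain4J-backed predictor would be registered here only when the
    // optional dependency is on the classpath and the user enables it;
    // everyone else keeps the cheap fallback.
    static RotationPredictor active = new LastValuePredictor();
}

public class OptInDemo {
    public static void main(String[] args) {
        float[] history = { 10f, 5f, 12f, 6f };
        float[] next = PredictorRegistry.active.predict(history);
        System.out.println(next[0] + "," + next[1]); // prints "12.0,6.0"
    }
}
```

The point of the indirection is that the client compiles and runs with zero ML dependencies by default, and the Brain4J path becomes a drop-in replacement rather than a hard requirement.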
### Additional context

_No response_
least obvious AI post LOL
AI issue
And this lib is Java 25. We can't use it.
Update: The author of the library downgraded to Java 21.
no
this just looks like a straight downgrade of what we currently have, doesn't even have GPU accel
> this just looks like a straight downgrade of what we currently have, doesn't even have GPU accel

you don't need GPU acceleration for smoothing rotations
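For reference, smoothing rotations on the CPU is indeed trivial — here is a plain-Java sketch of exponential smoothing of a yaw angle with wrap-around handling at the ±180° boundary. The class name and smoothing factor are illustrative, not LiquidBounce code:

```java
public class YawSmoother {
    /** Wraps an angle into the [-180, 180) degree range. */
    static float wrapDegrees(float angle) {
        angle %= 360f;
        if (angle >= 180f) angle -= 360f;
        if (angle < -180f) angle += 360f;
        return angle;
    }

    /**
     * Moves currentYaw a fraction of the way toward targetYaw,
     * always taking the shortest path around the circle.
     */
    static float smooth(float currentYaw, float targetYaw, float factor) {
        float delta = wrapDegrees(targetYaw - currentYaw);
        return wrapDegrees(currentYaw + delta * factor);
    }

    public static void main(String[] args) {
        // Crossing the -180/180 boundary: the shortest path from 170
        // to -170 is +20 degrees, not -340.
        System.out.println(smooth(170f, -170f, 0.25f)); // prints "175.0"
    }
}
```

The wrap-around step is the only subtlety; everything else is a single multiply-add per tick, which is far below the threshold where GPU offload (or a neural network) would pay for itself.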
> this just looks like a straight downgrade of what we currently have, doesn't even have GPU accel

> you don't need GPU acceleration for smoothing rotations

you also don't need AI for it either, yet here we are
> you also don't need AI for it either, yet here we are

Feel free to improve the traditional rotation system and keep it updated.
> you also don't need AI for it either, yet here we are

> Feel free to improve the traditional rotation system and keep it updated.

ehh, the only thing that detects it is Polar, and Polar will false anyways
> this just looks like a straight downgrade of what we currently have, doesn't even have GPU accel

in 3.0 it does actually (which is almost finished) + it has more architectures than just simple MLPs (such as LSTMs, RNNs, GCNs, CNNs and Transformers)
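For context on what "simple MLP" inference actually involves, here is a dependency-free sketch of a single-hidden-layer MLP forward pass. The weights are made up and this is not Brain4J's API — it just shows that this workload is a handful of multiply-adds, exactly the kind of thing that runs fine on CPU:

```java
public class TinyMlp {
    static double relu(double x) { return Math.max(0, x); }

    /** Computes y = W2 * relu(W1 * x + b1) + b2 for one hidden layer. */
    static double[] forward(double[] x, double[][] w1, double[] b1,
                            double[][] w2, double[] b2) {
        // Hidden layer: affine transform followed by ReLU.
        double[] h = new double[w1.length];
        for (int i = 0; i < w1.length; i++) {
            double s = b1[i];
            for (int j = 0; j < x.length; j++) s += w1[i][j] * x[j];
            h[i] = relu(s);
        }
        // Output layer: plain affine transform.
        double[] y = new double[w2.length];
        for (int i = 0; i < w2.length; i++) {
            double s = b2[i];
            for (int j = 0; j < h.length; j++) s += w2[i][j] * h[j];
            y[i] = s;
        }
        return y;
    }

    public static void main(String[] args) {
        // 2 inputs -> 2 hidden units -> 1 output, with made-up weights.
        double[][] w1 = { { 1, -1 }, { 0.5, 0.5 } };
        double[] b1 = { 0, 0 };
        double[][] w2 = { { 1, 2 } };
        double[] b2 = { 0.1 };
        System.out.println(forward(new double[] { 1, 2 }, w1, b1, w2, b2)[0]);
    }
}
```

Recurrent and attention-based architectures (LSTMs, Transformers) multiply this cost up considerably, which is where library-level optimizations like SIMD or GPU backends start to matter.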
> > this just looks like a straight downgrade of what we currently have, doesn't even have GPU accel
>
> in 3.0 it does actually (which is almost finished) + it has more architectures than just simple MLPs (such as LSTMs, RNNs, GCNs, CNNs and Transformers)

i doubt anyone is going to migrate LB; your best bet if you want it to happen is probably to migrate it yourself