Feature/move to fastify
Description:
Moved the api-gateway service to the NestJS + Fastify platform.
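For reviewers unfamiliar with the switch, here is a minimal sketch, assuming Nest's official Fastify adapter (`@nestjs/platform-fastify`); the module name and options are illustrative, not the exact Guardian code:

```typescript
// Sketch: bootstrapping a NestJS app on the Fastify adapter
// instead of the default Express platform.
import { NestFactory } from '@nestjs/core';
import {
  FastifyAdapter,
  NestFastifyApplication,
} from '@nestjs/platform-fastify';
import { AppModule } from './app.module'; // illustrative module name

async function bootstrap() {
  const app = await NestFactory.create<NestFastifyApplication>(
    AppModule,
    new FastifyAdapter({ logger: true }),
  );
  // Fastify binds to 127.0.0.1 by default; listen on 0.0.0.0 inside containers.
  await app.listen(3000, '0.0.0.0');
}
bootstrap();
```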
Related issue(s):
Checklist
- [x] Checked on localhost
- [x] Tested (unit, integration, etc.)
Unit Test Results
111 files ±0  220 suites ±0  48m 3s :stopwatch: + 3m 21s
192 tests ±0  192 :heavy_check_mark: ±0  0 :zzz: ±0  0 :x: ±0
195 runs  ±0  195 :heavy_check_mark: ±0  0 :zzz: ±0  0 :x: ±0
Results for commit 02a77e3e. ± Comparison against base commit f9c57aaf.
:recycle: This comment has been updated with latest results.
@mattsmithies As discussed in today's Tech Meet, wanted to loop you in this discussion.
Hello @mattsmithies, I heard that you would like us to prioritize database optimizations over Fastify. I'm glad to tell you that our team is constantly thinking about this and looking for solutions.

We're almost done with the transition to the Fastify platform. Testing and bug fixing are in the final stages, and I hope we'll merge this pull request into the develop branch next week.

In addition, we've implemented request caching to offload the database. We're currently working on adding cache invalidation, which will make us more flexible, letting us wrap more requests in the cache interceptor and make the platform even faster.

We've also put in a lot of effort to move to ESM (ECMAScript Modules), which will keep us compatible with more modern external modules going forward. And of course, we've optimized our Docker build, which makes working with the platform faster and more convenient.

We are fully focused on finding new optimizations and would be glad to hear any suggestions from you. Discussing them would be a great honor for the team.
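For context, the caching described above would typically look something like the sketch below, assuming Nest's built-in `CacheInterceptor` from `@nestjs/cache-manager`; the controller, route, and TTL are hypothetical, not Guardian's actual code:

```typescript
import { Controller, Get, UseInterceptors } from '@nestjs/common';
import { CacheInterceptor, CacheTTL } from '@nestjs/cache-manager';

@Controller('policies') // hypothetical resource
@UseInterceptors(CacheInterceptor)
export class PoliciesController {
  // GET /policies responses are cached; repeated reads skip the database.
  @Get()
  @CacheTTL(30_000) // milliseconds with cache-manager v5; seconds on older setups
  findAll() {
    // ...fetch and return policies from the database...
    return [];
  }
}
```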
Hey @ihar-tsykala, thank you for bringing me into this thread. I appreciate your efforts in understanding my perspective on these developments.
While I recognize the potential benefits of integrating the Fastify framework for improving performance, I remain a little skeptical about its impact on the current needs of the community. Typically, optimizations like these are most beneficial for systems handling millions of transactions, aiming to scale significantly. Nonetheless, the ultimate goal is to prepare Guardian for such scalability.
Regarding the prioritization of a robust caching process at this stage, I understand that it presents a quick win. However, considering the relatively low throughput our Guardian systems currently experience, optimizing database interactions might not be the most immediate bottleneck. It's noteworthy that cache invalidation, as referenced in #3641, remains a challenge due to the stateful nature of the API.
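To make concrete why invalidation is the hard part, here is a minimal sketch of evicting cache entries on a write path, assuming the default interceptor keys entries by request URL; the service, routes, and keys are hypothetical:

```typescript
import { Inject, Injectable } from '@nestjs/common';
import { CACHE_MANAGER } from '@nestjs/cache-manager';
import { Cache } from 'cache-manager';

@Injectable()
export class PolicyService {
  constructor(@Inject(CACHE_MANAGER) private readonly cache: Cache) {}

  async updatePolicy(id: string, patch: Record<string, unknown>) {
    // ...persist the change...
    // Evict the cached GET responses so readers don't see stale data.
    await this.cache.del(`/policies/${id}`);
    await this.cache.del('/policies');
  }
}
```

Every write path has to know which cached reads it affects, which is exactly where stateful APIs make this brittle.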
I want to provide some context on my recent work with Guardian, which I've been closely involved with since its alpha phase nearly three years ago. We are now developing our third bespoke API client specifically tailored for Guardian's unique needs.
This week, I plan to create extensive documentation addressing these challenges. The complexity required for effective use of the Guardian API is substantial and often not fully appreciated.
This is going to be combined with a new client that focuses on a partial "Dryrun SDK" with an end-to-end test case; I'm going to do my best to highlight particular areas that should be optimised from an API consumption point of view.
A core focus of mine has been minimizing the data transmitted over the network—essentially optimizing API scalability by adhering as closely as possible to REST principles. The ideal is for the API to handle operations on a single asset or a million with equal efficiency, avoiding N+1 query issues that can degrade performance gradually.
For instance, last year's integration of ELV credits revealed significant scalability issues; as the system scaled, data handling became increasingly inefficient due to N+1 problems with each new asset or verification approval, which eventually ground the system to a halt.
To combat this, we've shifted towards more RESTful practices by using filter blocks. Although this isn't a perfect solution and current invalidation challenges for filter blocks still lead to N+1 issues, it's a step in the right direction.
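To illustrate the difference in request patterns being described, here is a rough sketch; the endpoints and query shape are hypothetical, not Guardian's actual API:

```typescript
// N+1 pattern: one round trip per asset, so network cost grows linearly
// with the number of assets being processed.
async function fetchAssetsOneByOne(ids: string[]): Promise<unknown[]> {
  return Promise.all(
    ids.map((id) => fetch(`/api/v1/assets/${id}`).then((r) => r.json())),
  );
}

// Filtered pattern: a single request that asks the server to narrow the
// set, keeping round trips roughly constant as the system grows.
async function fetchAssetsFiltered(ids: string[]): Promise<unknown[]> {
  const res = await fetch(`/api/v1/assets?ids=${ids.join(',')}`);
  return res.json();
}
```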
I am more than willing to join any calls to discuss these points further. My stance is simple: focus on getting the foundational aspects right to ensure the API's functionality and scalability. I'm looking forward to potentially testing this with hundreds of thousands of users representing millions of assets in a simulated environment, and eventually on mainnet, considering it offers more bandwidth than the testnet.
Internally at DOVU we are committed to building middleware systems that will alleviate much of this burden on the Guardian team itself: tools built specifically for teams and developers like ourselves who primarily want to focus on API usage.