Rafael Cárdenas
Following our plan to move to microservices with an event-driven architecture, we should consider using [Kafka](https://kafka.apache.org/) to emit events that all other services can subscribe to.
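As a minimal sketch of the producer side, assuming a thin interface over whatever Kafka client we end up choosing (the `EventProducer` interface, the `BlockEvent` shape, and the `stacks.blocks` topic name are all hypothetical, not settled decisions):

```typescript
// Hypothetical event shape emitted by the API's event observer.
interface BlockEvent {
  type: "block";
  blockHash: string;
  height: number;
}

// Minimal producer interface standing in for a real Kafka client
// (e.g. kafkajs); the concrete client choice is an open question.
interface EventProducer {
  send(topic: string, key: string, value: string): Promise<void>;
}

// Keying messages by block hash keeps per-entity ordering within a
// Kafka partition, so consumers see events for a block in order.
async function emitBlockEvent(
  producer: EventProducer,
  event: BlockEvent
): Promise<void> {
  await producer.send("stacks.blocks", event.blockHash, JSON.stringify(event));
}
```

Keeping the client behind a small interface like this would also let each microservice stub the producer in tests without a running broker.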
Rosetta endpoints have their own logic in terms of structure, call flow, etc. They are mostly read-only endpoints, with the exception of transaction broadcasts, which should go directly to the new...
This would be handled entirely by nginx reverse proxy configs (https://github.com/hirosystems/devops/issues/983). Nginx will also handle any necessary logging for transaction tracing.
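A rough illustration of how that split could look in nginx (the upstream names and log paths here are assumptions for illustration; the real config lives in the devops repo):

```nginx
# Sketch only; upstream names and log paths are hypothetical.
location = /rosetta/v1/construction/submit {
    # Transaction broadcasts bypass the read-only API service.
    proxy_pass http://stacks-node-upstream;
    access_log /var/log/nginx/tx-trace.log;  # transaction tracing
}

location /rosetta/ {
    # Everything else under the Rosetta namespace is read-only.
    proxy_pass http://stacks-api-upstream;
}
```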
Since we want to expand the capabilities of our websocket events, it would be better to spin them off into a separate service that can handle its own...
### Background

Our Stacks API currently exposes [a few endpoints](https://docs.hiro.so/api#tag/Fungible-Tokens/operation/get_contract_ft_metadata) that attempt to serve FT metadata. This data is pulled by a background queue that listens for new contracts as...
We currently have two cache handlers that we use to create ETags for most of our endpoints:

1. **Chain tip**: Latest block/microblock hash
2. **Mempool**: Hash digest of all non-pruned...
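A minimal sketch of the two strategies described above (the `ChainTip` shape and function names are illustrative, not the API's actual implementation):

```typescript
import { createHash } from "node:crypto";

// Hypothetical shape; field names are illustrative only.
interface ChainTip {
  blockHash: string;
  microblockHash?: string;
}

// Chain-tip ETag: the latest block (or microblock) hash already uniquely
// identifies the indexed chain state, so it can serve as the tag directly.
function chainTipEtag(tip: ChainTip): string {
  return tip.microblockHash ?? tip.blockHash;
}

// Mempool ETag: digest the sorted txids of all non-pruned mempool
// transactions, so any addition or removal produces a new tag.
function mempoolEtag(txIds: string[]): string {
  const digest = createHash("sha256");
  for (const txId of [...txIds].sort()) {
    digest.update(txId);
  }
  return digest.digest("hex");
}
```

Sorting before hashing makes the mempool tag independent of the order rows come back from the database.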
The current `import-events` process takes ~18 hours to complete in production deployments. It requires importing the complete node event TSV into our tables while calculating re-orgs,...
When a transaction is re-orged because a new block arrived that made it non-canonical, it is normally returned to the mempool so we show it again as `pending`....
We've recently been having replication problems that Postgres reports as "deadlocks" on certain queries. To be able to try other replication modes, we...