yusongyan
> This seems to have gone silent?

I already have the fix, and it is currently under review. Sorry for the delay!
Duplicate of https://github.com/yugabyte/yugabyte-db/issues/17847
**Root cause of the read timeout issue (present since day 0)** The read deadline is currently only checked while processing the subdocs of a row. However, in certain cases, before we even process...
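As a rough illustration of the pattern (this is not the actual DocDB code; the type and function names below are hypothetical), a deadline that is only consulted inside the per-row subdoc loop never fires for any work done before that loop is reached:

```
#include <chrono>
#include <iostream>
#include <string>
#include <vector>

using Clock = std::chrono::steady_clock;

// Hypothetical row type standing in for a DocDB row with multiple subdocuments.
struct Row {
  std::vector<std::string> subdocs;
};

// Returns false if the deadline expires while processing subdocs.
bool ProcessRows(const std::vector<Row>& rows, Clock::time_point deadline) {
  for (const auto& row : rows) {
    // Any expensive work done here (e.g. seeking to the row) happens before
    // the deadline is consulted, so it can overrun the deadline unchecked.
    for (const auto& subdoc : row.subdocs) {
      // The deadline is only checked here, once per subdoc.
      if (Clock::now() > deadline) {
        std::cout << "read deadline exceeded\n";
        return false;
      }
      // ... process subdoc ...
      (void)subdoc;
    }
  }
  return true;
}
```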
Tested on a tserver with 4,000 tables and 14,000 tablets. Each scrape took approximately 15 seconds when using the default metric scraping URL parameters: `/prometheus-metrics?show_help=false&priority_regex=rocksdb_(number_db_(next|seek|prev)|db_iter_bytes_read|block_cache_(add|single_touch_add|multi_touch_add)|current_version_(sst_files_size|num_sst_files)|db_([^_]+_micros_[^_]+|mutex_wait_micros)|block_cache_(hit|miss)|bloom_filter_(checked|useful)|stall_micros|flush_write_bytes|compact_[^_]+_bytes|compaction_times_micros_[^_]+|numfiles_in_singlecompaction_[^_]+)|mem_tracker_(RegularDB_MemTable|IntentsDB_MemTable)|mem_tracker_server_PerTablet_(RegularDB_MemTable|IntentsDB_MemTable)|mem_tracker_server_Tablets_overhead_PerTablet_(RegularDB_MemTable|IntentsDB_MemTable)|async_replication_[^_]+_lag_micros|consumer_safe_time_[^_]+|transaction_conflicts|majority_sst_files_rejections|expired_transactions|log_(sync_latency_[^_]+|group_commit_latency_[^_]+|append_latency_[^_]+|bytes_logged|reader_bytes_read|cache_size|cache_num_ops)|follower_lag_ms|[^_]+_memory_pressure_rejections|log_wal_size|ql_read_latency_[^_]+|(all|write)_operations_inflight|ql_write_latency_[^_]+|write_lock_latency_[^_]+|is_raft_leader|ts_live_tablet_peers`. During scraping, `perf record` was...
Blocking metric export at both the server and table levels (producing empty output) took approximately 3 seconds to complete, using the following URL parameters: `/prometheus-metrics?show_help=false&version=v2&table_blocklist=ALL&server_blocklist=ALL`
The `PrometheusWriter::WriteSingleEntry` map allocation stack in the above flamegraph originates from this code:

```
MetricEntity::AttributeMap new_attr = attr;
new_attr.erase("table_id");
new_attr.erase("table_name");
new_attr.erase("table_type");
new_attr.erase("namespace_name");
```

This GitHub issue will track the commit...
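As a minimal, standalone sketch of why this pattern is expensive (the names below are hypothetical and this is not the actual writer code): copying and pruning the attribute map on every metric write allocates map nodes per call, whereas computing the stripped map once per entity and reusing it avoids the per-metric copies.

```
#include <map>
#include <string>
#include <vector>

using AttributeMap = std::map<std::string, std::string>;

// Per-call copy + erase: one full map copy (and node allocations) per metric.
AttributeMap StripPerCall(const AttributeMap& attr) {
  AttributeMap new_attr = attr;
  new_attr.erase("table_id");
  new_attr.erase("table_name");
  new_attr.erase("table_type");
  new_attr.erase("namespace_name");
  return new_attr;
}

// Alternative: compute the stripped map once per entity and reuse it for
// every metric written from that entity.
class EntityWriter {
 public:
  explicit EntityWriter(const AttributeMap& attr) : stripped_(StripPerCall(attr)) {}

  void WriteMetric(const std::string& name, double value, std::vector<std::string>* out) {
    // Uses the cached stripped_ map; no per-metric copy or erase calls.
    std::string line = name;
    for (const auto& [k, v] : stripped_) {
      line += " " + k + "=" + v;
    }
    line += " " + std::to_string(value);
    out->push_back(std::move(line));
  }

 private:
  const AttributeMap stripped_;
};
```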
A `rpcs_in_queue_+` metric is created within each of those services’ ServicePool. The reason we are seeing these “duplicated metrics” is that those services all have the same service_name, `yb.master.MasterService`:

```
std::unique_ptr...
```
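To make the collision concrete, here is a small standalone sketch (with a hypothetical registry; this is not the actual ServicePool code) of several pools registering a queue-depth metric under the same service name, so the exported output contains what look like duplicated metrics:

```
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Hypothetical registry: each ServicePool registers its own queue-depth gauge
// keyed by a metric name derived from the service_name.
int main() {
  std::multimap<std::string, int> registry;

  // Several services are wrapped in separate ServicePools, but they all report
  // the same service_name, so they all register under the same metric name.
  std::vector<std::string> service_names = {
      "yb.master.MasterService", "yb.master.MasterService", "yb.master.MasterService"};
  for (const auto& name : service_names) {
    registry.emplace("rpcs_in_queue_" + name, /* current queue depth */ 0);
  }

  // Exporting the registry then emits the same metric name multiple times.
  for (const auto& [metric, value] : registry) {
    std::cout << metric << " " << value << "\n";
  }
}
```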
[Here](https://github.com/yugabyte/yugabyte-db/blob/9f82c017b59faa674364494e8630189567e1df52/src/yb/tablet/transaction_loader.cc#L101) is where the executor is destroyed.

```
void LoadFinished(Status load_status) EXCLUDES(status_resolvers_mutex_) override {
  ...
  start_latch_.Wait();
```

`start_latch_.Wait()` is released after tablet bootstrap finishes.
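For context on the blocking behavior, here is a minimal sketch of the pattern (using `std::latch` and hypothetical function names, not YugabyteDB's own latch type or the real loader code): the loader cannot return from its finish step, and therefore cannot tear down its resources, until another thread counts the latch down at the end of tablet bootstrap.

```
#include <chrono>
#include <iostream>
#include <latch>
#include <thread>

// Hypothetical stand-ins for the loader and bootstrap sides of the handoff.
std::latch start_latch(1);

void LoadFinished() {
  std::cout << "loader: finished loading, waiting on start_latch\n";
  // The loader blocks here; its executor (and other resources) can only be
  // torn down after this wait returns.
  start_latch.wait();
  std::cout << "loader: latch released, cleaning up\n";
}

void TabletBootstrap() {
  std::this_thread::sleep_for(std::chrono::milliseconds(200));  // simulated bootstrap work
  std::cout << "bootstrap: finished, releasing loader\n";
  start_latch.count_down();
}

int main() {
  std::thread loader(LoadFinished);
  std::thread bootstrap(TabletBootstrap);
  loader.join();
  bootstrap.join();
}
```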