Tens of millions of tables cause performance degradation
I have a data set containing billions of collection points. Because they share the same data structure, I created a super table and one table per collection point. At the beginning, when there were only hundreds of tables, I could query the data of a collection point in tens of milliseconds, but once the number of tables rose to tens of millions, querying the data of a collection point takes 3-4 seconds. How should I optimize this?
Environment: Aliyun DS4 v2 ECS, 8-core CPU and 64 GB memory
May I know your TDengine version?
And your table structure (`describe tbname`)?
- version: 2.0.18.0
- STable Structure:
CREATE STABLE IF NOT EXISTS nadc.obs (start_time TIMESTAMP, exp_time DOUBLE, flux DOUBLE, flux_err DOUBLE) TAGS (src_id BIGINT, energy_start DOUBLE, energy_end DOUBLE, instru_NAME NCHAR(100), instru_id BIGINT);
- Table Structure:
CREATE TABLE IF NOT EXISTS nadc.src12222933_Gaia_rp USING nadc.obs TAGS (12222933, 4.0, 10.0, 'Gaia_rp', 55);
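For context, the per-point query being timed would look something like the following sketch. The column selection and time range are illustrative, not from the original post:

```sql
-- Query one collection point directly by its subtable name
-- (time range is a hypothetical example)
SELECT start_time, flux, flux_err
  FROM nadc.src12222933_Gaia_rp
  WHERE start_time >= '2021-01-01 00:00:00';

-- Equivalent query through the super table, filtering by tag
SELECT start_time, flux, flux_err
  FROM nadc.obs
  WHERE src_id = 12222933;
```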
The 2.x series is no longer officially maintained or supported, so please migrate and upgrade to the latest 3.x version. The migration guide is here: https://www.taosdata.com/tdengine-engineering/17753.html. Compared with 2.x, 3.x is superior across the board; for the main features of 3.x, see this article together with the official documentation: https://www.taosdata.com/tdengine-engineering/21550.html
For the operating systems supported by the open-source 3.x release, see: https://docs.taosdata.com/reference/support-platform/