info_schema.tables query causes mysqld crash on 5.6 if executed during crash recovery
mysqld_exporter version: output of mysqld_exporter --version
prom/mysqld-exporter:v0.10.0
MySQL server version
Percona Server 5.6 (percona:5.6 image, currently 5.6.47)
mysqld_exporter command line flags
/bin/mysqld_exporter
(none)
What did you do that produced an error?
We suffered a node failure and a mysqld had to perform crash recovery.
What did you expect to see?
Successful recovery, node goes green.
What did you see instead?
mysqld crashed with the following:
2020-09-13 02:54:04 1 [Note] mysqld: ready for connections.
Version: '5.6.47-87.0' socket: '/var/lib/mysql/mysql.sock' port: 3306 Percona Server (GPL), Release 87.0, Revision 9ad342b
02:54:07 UTC - mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Please help us make Percona Server better by reporting any
bugs at https://bugs.percona.com/
key_buffer_size=8388608
read_buffer_size=131072
max_used_connections=1
max_threads=153
thread_count=1
connection_count=1
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 69062 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
Thread pointer: 0x29932c0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 7fc74a6a1d40 thread_stack 0x30000
mysqld(my_print_stacktrace+0x3b)[0x8ef3eb]
mysqld(handle_fatal_signal+0x471)[0x663bd1]
/lib64/libpthread.so.0(+0xf630)[0x7fc7964fa630]
mysqld(_Z29page_find_rec_max_not_deletedPKh+0xb0)[0x9ab280]
mysqld[0x9f5bdc]
mysqld[0x9447bf]
mysqld[0x94b1fe]
mysqld(_ZN7handler7ha_openEP5TABLEPKcii+0x33)[0x5a3403]
mysqld(_Z21open_table_from_shareP3THDP11TABLE_SHAREPKcjjjP5TABLEb+0x6bc)[0x76f77c]
mysqld(_Z10open_tableP3THDP10TABLE_LISTP18Open_table_context+0x1116)[0x699156]
mysqld(_Z11open_tablesP3THDPP10TABLE_LISTPjjP19Prelocking_strategy+0x6c5)[0x6a1155]
mysqld(_Z30open_normal_and_derived_tablesP3THDP10TABLE_LISTj+0x60)[0x6a1a60]
mysqld[0x717629]
mysqld(_Z14get_all_tablesP3THDP10TABLE_LISTP4Item+0x699)[0x72a2f9]
mysqld(_Z24get_schema_tables_resultP4JOIN23enum_schema_table_state+0x2da)[0x72ae1a]
mysqld(_ZN4JOIN14prepare_resultEPP4ListI4ItemE+0xa5)[0x70b2a5]
mysqld(_ZN4JOIN4execEv+0x15c)[0x6c480c]
mysqld(_Z12mysql_selectP3THDP10TABLE_LISTjR4ListI4ItemEPS4_P10SQL_I_ListI8st_orderESB_S7_yP13select_resultP18st_select_lex_unitP13st_select_lex+0x275)[0x70fbf5]
mysqld(_Z13handle_selectP3THDP13select_resultm+0x195)[0x7104b5]
It also logged that connection 1 was running the following query:
SELECT TABLE_SCHEMA, TABLE_NAME, TABLE_TYPE, ifnull(ENGINE, 'NONE') as ENGINE, ifnull(VERSION, '0') as VERSION,
ifnull(ROW_FORMAT, 'NONE') as ROW_FORMAT, ifnull(TABLE_ROWS, '0') as TABLE_ROWS, ifnull(DATA_LENGTH, '0') as DATA_LENGTH,
ifnull(INDEX_LENGTH, '0') as INDEX_LENGTH, ifnull(DATA_FREE, '0') as DATA_FREE, ifnull(CREATE_OPTIONS, 'NONE') as CREATE_OPTIONS
FROM information_schema.tables WHERE TABLE_SCHEMA = '<my schema>'
Adding --collect.info_schema.tables=false allowed the server to boot up.
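For anyone hitting the same crash, this is roughly the workaround invocation (a sketch, not our exact deployment; the DATA_SOURCE_NAME value is a placeholder):

# Disable the tables collector so the exporter never issues the
# information_schema.tables query shown above. DSN is a placeholder.
DATA_SOURCE_NAME="exporter:password@(127.0.0.1:3306)/" \
  /bin/mysqld_exporter --collect.info_schema.tables=false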
Following the page_find_rec_max_not_deleted log line, I found this and have commented on its new location in the Percona Jira.
We have plenty of other MySQL servers with this metric collection enabled, so this looks like a race condition with crash recovery: this server has a lot of tables and the query isn't particularly performant, while the other servers have no issues booting.
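As a rough way to gauge exposure elsewhere, something like this shows how many rows the collector's query has to materialize on a given server (a sketch; credentials and schema name are placeholders):

# Cheap count of the tables the collector would enumerate for one schema.
mysql -u exporter -p -e "SELECT COUNT(*) FROM information_schema.tables WHERE TABLE_SCHEMA = '<my schema>'"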
Thanks!