# add error_record_table
Better names are welcome!
After some extra thinking, I would like to propose to use a single error_table rather than one error_table per operator.
As mentioned in the RFC, our targets include:
1. We can ensure that our storage engine can handle the volume of erroneous data, as it is of the same magnitude as the source.
2. We can ensure the error records are durable.
3. Users can view the error records directly over psql.
4. Users can reproduce the error easily with a similar SQL query.
In short, replacing the per-operator error_table with a single error_table greatly enhances goal 3 and slightly impairs goal 4. Additionally, it avoids introducing lots of error_tables, which would be confusing for most users who don't care about errors, and it keeps backward compatibility.
I'll explain these pros and cons one by one.
"Enhance 3: Users can view the error records directly over psql"
In the single-table approach, users only need to query rw_error_table to get all errors that happened recently. This is intuitive to anyone.
By contrast, in the error_table-per-operator approach, we might need to provide an extra 'view' for that purpose. Note that this is not an actual 'view', because whenever a new materialized view appears, the view needs to be updated accordingly, which feels somewhat hacky to me.
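As a sketch of the single-table experience (the table name rw_error_table and its columns are hypothetical, not a decided schema):

```sql
-- All recent errors, regardless of which operator produced them.
-- Assumed columns: error_time, fragment_id, error_message.
select error_time, fragment_id, error_message
from rw_error_table
where error_time > now() - interval '1 hour'
order by error_time desc;
```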
"Slightly impair 4: Users can reproduce the error easily with a similar SQL query."
In most cases, users don't really need to reproduce the error in RisingWave (unless it's a bug in RisingWave, but that's another story). For example, if I encountered a "JSON Parse" exception as a user, it would be enough to tell me the record ID, and I could check the JSON in the upstream system myself. I would then fix that data and optionally emit the record into Kafka again for RisingWave to consume. Reproducing it on the RisingWave side doesn't help me much.
"It avoids introducing lots of error_tables"
This is ugly both for users and for us.
From the user's side, our SHOW ALL TABLES command is supposed to show all internal state tables. It looks confusing if more than half of them are xxx_error_record_tables.
For us, we currently assume a table is one of three types: tables, materialized views, and streaming states. The xxx_error_record_tables would be a new kind and would create lots of related work: compaction groups, monitoring metrics, etc.
"Keeps backward compatibility"
Because there will be only one error_table, we can simply hard-code this table in the system catalog. That means we don't need to change the plan nodes of existing fragments, limiting the changes to the operators' code.
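For illustration, the hard-coded catalog table could look roughly like this (the column names are assumptions for the sake of the example, not a decided schema):

```sql
create table rw_error_table (
    id            varchar primary key,  -- random unique ID (e.g. a UUID)
    error_time    timestamp,
    fragment_id   int,                  -- which fragment/operator produced the error
    error_message varchar,
    payload       varchar               -- raw offending record; could also be jsonb
);
```

Storing the payload as raw text rather than jsonb sidesteps the chicken-and-egg case where the error is precisely that the record failed to parse as JSON.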
Finally, since we have discussed that we will write into the "error_record_tables" with blind writes and singleton distribution (meaning the writers aren't aware of any distribution and simply write with a random unique ID), both approaches have similar complexity in terms of the core implementation (i.e. the write path).
> "Enhance 3: Users can view the error records directly over psql"
> In the single-table approach, users only need to query rw_error_table to get all errors that happened recently. This is intuitive to anyone.
> By contrast, in the error_table-per-operator approach, we might need to provide an extra 'view' for that purpose. Note that this is not an actual 'view', because whenever a new materialized view appears, the view needs to be updated accordingly, which feels somewhat hacky to me.
Regarding 3, I didn't quite get it. Why don't we just CREATE VIEW (rather than CREATE MATERIALIZED VIEW) to simulate the rw_error_table containing all recent errors? This runs the computation ad hoc rather than incrementally.
> Finally, since we have discussed that we will write into the "error_record_tables" with blind writes and singleton distribution (meaning the writers aren't aware of any distribution and simply write with a random unique ID), both approaches have similar complexity in terms of the core implementation (i.e. the write path).
How do we synchronize access to the state table? Or do we use a UUID here as the key instead of row_id? If so, that makes sense 👍
Also, I'm assuming the error record will be stored as jsonb? Is our jsonb implementation still fixed-size, so it could lead to data loss?
I guess another advantage of an ERT per executor is that we won't encounter this?
> By contrast, in the error_table-per-operator approach, we might need to provide an extra 'view' for that purpose. Note that this is not an actual 'view', because whenever a new materialized view appears, the view needs to be updated accordingly, which feels somewhat hacky to me.

> Regarding 3, I didn't quite get it. Why don't we just CREATE VIEW (rather than CREATE MATERIALIZED VIEW) to simulate the rw_error_table containing all recent errors? This runs the computation ad hoc rather than incrementally.
No, I'm not using MATERIALIZED VIEW.
A "view" in a database is defined by a SQL query: CREATE VIEW <view_name> AS <query>. However, here we have multiple error_tables with different schemas:
| table name | schema |
| --- | --- |
| error_table_of_operator_a | col1 (int), col2 (varchar), error_message (varchar) |
| error_table_of_operator_b | col1 (int), col2 (decimal), col3 (varchar), error_message (varchar) |
| error_table_of_operator_c | col1 (varchar), col2 (varchar), col3 (int), error_message (varchar) |
If we want to query all the error tables, the query would look like:
```sql
/* note the data columns are lost because they can't be merged into one result set */
select error_message from error_table_of_operator_a
union all
select error_message from error_table_of_operator_b
union all
select error_message from error_table_of_operator_c
```
Furthermore, if a materialized view is created or dropped, the view needs to be updated accordingly:
```sql
select error_message from error_table_of_operator_a
union all
select error_message from error_table_of_operator_b
union all
select error_message from error_table_of_operator_c
union all
select error_message from error_table_of_operator_d -- d is a new operator
```
That's why I said it's kind of dirty: we would need to hook into the DDL process. Alternatively, we could give up the SQL approach and provide a hard-coded access path. This is also dirty, because we'd need to construct such a query plan by hand instead of going through the normal optimizer. 😇
> How do we synchronize access to the state table? Or do we use a UUID here as the key instead of row_id? If so, that makes sense 👍
Yes, a UUID should be the most practical solution.
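Conceptually, each writer performs a blind insert keyed by a fresh UUID, so no coordination between writers is needed. A PostgreSQL-style sketch (rw_error_table and its columns are hypothetical; gen_random_uuid() stands in for whatever random-ID generator we end up using):

```sql
-- Blind write: the key is freshly generated, so it cannot conflict with
-- concurrent writers and no read-before-write is required.
insert into rw_error_table (id, error_time, fragment_id, error_message, payload)
values (gen_random_uuid(), now(), 42, 'JSON Parse error', '{"broken": tru');
```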
> Also, I'm assuming the error record will be stored as jsonb? Is our jsonb implementation still fixed-size, so it could lead to data loss?
Yes, JSONB is one of the options. Technically there is no reason for data loss. Actually, I don't really care whether the record is kept losslessly; I think it's okay to truncate records to limit their size, because the errors are only meant for human users to read.
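If we do cap the size, truncation can happen at write time. A sketch in PostgreSQL syntax (rw_error_table is hypothetical, and the 1 KiB cap is an arbitrary example):

```sql
-- $1 is the raw offending record text; keep at most its first 1024 characters.
-- Errors are for humans to read, so losing the tail of a huge payload is acceptable.
insert into rw_error_table (id, error_time, error_message, payload)
values (gen_random_uuid(), now(), 'JSON Parse error', left($1, 1024));
```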
> I guess another advantage of an ERT per executor is that we won't encounter this?
True.