Automatically allocate a new chunk group instead of throwing an error due to buffer size limits
As reported in #5918, we might throw an error when writing into columnar storage due to several buffer size limits enforced by the safestring library or by Postgres.
While the buffer size limit for memcpy_s() is 256MB, memcpy() doesn't enforce such a limitation; so #6419 attempts to improve this situation by switching to plain memcpy() instead of memcpy_s().
Note that even if we use memcpy(), memory allocation routines such as enlargeStringInfo() still enforce a limit of 1GB when writing a chunk group to disk.
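For context, here is a minimal sketch of where the two limits bite, assuming the vendored safestring memcpy_s() bound of 256MB and the ~1GB MaxAllocSize check inside Postgres's enlargeStringInfo(); the function and buffer names are illustrative, not the actual columnar serialization code.

```c
/*
 * Illustrative only: shows the two size checks described above, not the
 * actual columnar serialization path.
 */
#include "postgres.h"
#include "lib/stringinfo.h"		/* StringInfo, enlargeStringInfo() */

static void
append_serialized_value(StringInfo chunkBuffer, const char *value, int valueLen)
{
	/*
	 * With the vendored safestring library, memcpy_s() rejects copies larger
	 * than 256MB; #6419 side-steps that by using plain memcpy() instead.
	 *
	 * Even with plain memcpy(), enlargeStringInfo() errors out once the
	 * buffer would have to grow past MaxAllocSize (~1GB), so a single chunk
	 * group still cannot exceed that size.
	 */
	enlargeStringInfo(chunkBuffer, valueLen);
	memcpy(chunkBuffer->data + chunkBuffer->len, value, valueLen);
	chunkBuffer->len += valueLen;
	chunkBuffer->data[chunkBuffer->len] = '\0';
}
```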
Indeed, we could almost completely remove such memory limitations by allocating a new chunk group in ColumnarWriteRow() instead of throwing an error at runtime (i.e., when trying to expand the latest chunk group beyond the GUC limits).
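A rough sketch of how that could look is below. ColumnarWriteRow() is the existing entry point; everything marked hypothetical (RowSerializedSize(), CurrentChunkGroupSize(), CurrentChunkGroupRowCount(), FlushChunkGroup(), StartNewChunkGroup(), AppendRowToCurrentChunkGroup(), the byte-limit constant, and the assumed name of the C variable backing columnar.chunk_group_row_limit) is a placeholder for the real writer internals, not the Citus implementation.

```c
/*
 * Sketch of the proposed behavior: when the current chunk group can no
 * longer grow, close it and start a new one instead of erroring out.
 * Helpers marked "hypothetical" are placeholders, not Citus APIs.
 */
#include "postgres.h"
#include "columnar/columnar.h"	/* ColumnarWriteState (assumed header) */

/* Stay safely under the ~1GB StringInfo limit; value chosen for illustration. */
#define CHUNK_GROUP_BYTE_LIMIT ((Size) 900 * 1024 * 1024)

/* Assumed name of the C variable backing columnar.chunk_group_row_limit. */
extern int columnar_chunk_group_row_limit;

/* Hypothetical helper prototypes standing in for the real writer internals. */
extern Size RowSerializedSize(ColumnarWriteState *state, Datum *values, bool *nulls);
extern Size CurrentChunkGroupSize(ColumnarWriteState *state);
extern uint64 CurrentChunkGroupRowCount(ColumnarWriteState *state);
extern void FlushChunkGroup(ColumnarWriteState *state);
extern void StartNewChunkGroup(ColumnarWriteState *state);
extern void AppendRowToCurrentChunkGroup(ColumnarWriteState *state, Datum *values, bool *nulls);

void
ColumnarWriteRow(ColumnarWriteState *writeState, Datum *columnValues, bool *columnNulls)
{
	Size		rowSize = RowSerializedSize(writeState, columnValues, columnNulls);

	/*
	 * If appending this row would push the serialized chunk group past the
	 * byte limit, or past columnar.chunk_group_row_limit, flush what we have
	 * and open a fresh chunk group rather than throwing an error later.
	 */
	if (CurrentChunkGroupSize(writeState) + rowSize > CHUNK_GROUP_BYTE_LIMIT ||
		CurrentChunkGroupRowCount(writeState) >= columnar_chunk_group_row_limit)
	{
		FlushChunkGroup(writeState);		/* write the current group to disk */
		StartNewChunkGroup(writeState);		/* reset the in-memory buffers */
	}

	AppendRowToCurrentChunkGroup(writeState, columnValues, columnNulls);
}
```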
The workaround for this issue is to lower the columnar.chunk_group_row_limit setting, possibly only for the data load operation that is having trouble.
This doesn't work since the minimum for columnar.chunk_group_row_limit is 1000 rows, so if individual values are larger than about 1 MB, a chunk group can still exceed the 1GB limit and the write breaks anyway.