pglite-socket: crash when creating an enum type with a large number of values
I have a large enum type in my application with approximately 1400 values. When it is created with a single `CREATE TYPE public.big_enum AS ENUM` statement, the database crashes with:

```
2025-10-18 21:22:47.645 GMT [0] FATAL: terminating connection because protocol synchronization was lost
```
However, when I break the enum into individual `ALTER TYPE` statements:

```
ALTER TYPE public.big_enum ADD VALUE 'rf_0000002';
ALTER TYPE public.big_enum ADD VALUE 'rf_0000003';
```

the database executes them without a problem.
Steps to reproduce:
Create a `schema.sql` file containing a single `CREATE TYPE` with 1000 values (a sketch for generating it is included at the end of this report):
```
CREATE TYPE public.big_enum AS ENUM (
  'rf_0000001',
  'rf_0000002',
  'rf_0000003',
  'rf_0000004',
  'rf_0000005',
  'rf_0000006',
  'rf_0000007',
  ......
);
```
Run the database: `npx pglite-server --path=/tmp/.s.PGSQL.5432 --debug=5 -d data`
Load the schema file with `PGSSLMODE=disable psql -h /tmp -f schema.sql`
The database will crash.
Converting the schema to individual `ALTER TYPE` statements works.
I observed the same issue with a large INSERT.
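For reference, a minimal Node.js sketch that can generate such a `schema.sql` (the generation script itself is not part of this report; the output file name and value count are assumptions taken from the steps above):

```
// Sketch: write a schema.sql containing one CREATE TYPE with 1000 values.
// File name and value count are assumptions matching the repro steps.
import { writeFileSync } from 'node:fs'

const labels = Array.from({ length: 1000 }, (_, i) => `rf_${String(i + 1).padStart(7, '0')}`)
const sql = `CREATE TYPE public.big_enum AS ENUM (\n  ${labels.map((l) => `'${l}'`).join(',\n  ')}\n);\n`

writeFileSync('schema.sql', sql)
```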
Thank you for this @itsWill !
One thing to try out would be to do the same insert directly with PGlite (i.e. without the pglite-server) and see if it still crashes.
Confirmed it works in PGlite with the following script:
```
import { PGlite } from '@electric-sql/pglite'

async function main() {
  const db = new PGlite()

  await db.query(`
    CREATE TYPE public.big_enum AS ENUM (
      'rf_0000001',
      'rf_0000002',
      .....
      'rf_0001000'
    );`)
  console.log("create big enum succeeded")

  let result = await db.query(`
    SELECT unnest(enum_range(NULL::public.big_enum)) AS value;
  `)
  console.log(`returned rows: ${result.rows.length}`)

  result = await db.query(`
    CREATE TABLE public.enum_samples (id BIGSERIAL PRIMARY KEY, tag public.big_enum NOT NULL)
  `)
  console.log("created test table")

  const labels = Array.from({ length: 1000 }, (_, i) => `rf_${String(i + 1).padStart(7, '0')}`);
  const placeholders = labels.map((_, i) => `(CAST($${i + 1} AS public.big_enum))`).join(', ');
  const largeInsert = `INSERT INTO public.enum_samples (tag) VALUES ${placeholders}`;
  const res = await db.query(largeInsert, labels);
  console.log(`Completed insert affected rows: ${res.affectedRows}`);
}

main();
```
The issue is on the socket-server side.
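For completeness, the failing path through pglite-server can also be driven programmatically. A minimal sketch, assuming the node-postgres `pg` package (which is not used elsewhere in this thread) and connection parameters matching the psql invocation above:

```
import { Client } from 'pg'

async function main() {
  // host '/tmp' makes node-postgres connect to the Unix socket
  // /tmp/.s.PGSQL.5432 created by `npx pglite-server --path=/tmp/.s.PGSQL.5432`.
  const client = new Client({ host: '/tmp', port: 5432 })
  await client.connect()

  const labels = Array.from({ length: 1000 }, (_, i) => `rf_${String(i + 1).padStart(7, '0')}`)
  // This single large CREATE TYPE is the statement reported above to crash
  // the server with "protocol synchronization was lost".
  await client.query(`CREATE TYPE public.big_enum AS ENUM (${labels.map((l) => `'${l}'`).join(', ')})`)

  await client.end()
}

main()
```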
Thank you @itsWill, this helps!
Will try to address it in a timely manner.
I am also seeing a similar issue. Basically any statement larger than 64 KB (0xffff bytes) gets split into two TCP packets, and this line https://github.com/electric-sql/pglite/blob/main/packages/pglite-socket/src/index.ts#L111 sends each chunk straight to be processed as a raw request.
I think the TCP chunks need to be joined before that happens. The maximum length of a Postgres protocol message is defined by its 32-bit length field, but each TCP packet is at most 64 KB (observed with --debug=5).
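To illustrate the framing this relies on (a minimal sketch, not pglite code): a simple-protocol Query message is a 'Q' type byte followed by a 32-bit big-endian length that counts itself and the body, but not the type byte.

```
// Sketch of PostgreSQL simple Query ('Q') message framing.
function encodeQueryMessage(sql: string): Buffer {
  const body = Buffer.from(sql + '\0', 'utf8') // query text, null-terminated
  const header = Buffer.alloc(5)
  header[0] = 'Q'.charCodeAt(0)                // message type byte
  header.writeUInt32BE(body.length + 4, 1)     // length counts itself + body, not the type byte
  return Buffer.concat([header, body])
}

const msg = encodeQueryMessage("SELECT 'x'")
// Total bytes on the wire = 1 (type byte) + value of the length field:
console.log(msg.length === 1 + msg.readUInt32BE(1)) // true
// A large statement therefore spans multiple 'data' events, and the server
// has to buffer until 1 + length bytes have arrived before parsing it.
```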
I will try to send a proper PR later, but the following change worked for me and resolves the error:
```
let messageBuffer: Buffer = Buffer.alloc(0);
let bytesRemaining = 0;

// Setup event handlers
this.log(`attach: setting up socket event handlers`);
socket.on('data', async (data) => {
  if (bytesRemaining <= 0) {
    // Start of a new message: for a simple Query ('Q'), read the 32-bit
    // length field. It counts itself and the body but not the type byte,
    // so the full message is 1 + length bytes.
    if (data[0] === 'Q'.charCodeAt(0)) {
      bytesRemaining = 1 + data.readUInt32BE(1);
    } else {
      // Other message types are passed through unbuffered, as before.
      bytesRemaining = 0;
    }
  }

  messageBuffer = Buffer.concat([messageBuffer, data]);
  bytesRemaining -= data.length;

  // Only hand the data off once the whole message has arrived.
  if (bytesRemaining <= 0) {
    try {
      const result = await this.handleData(messageBuffer);
      messageBuffer = Buffer.alloc(0);
      this.log(`socket on data sent: ${result} bytes`);
    } catch (err) {
      this.log('socket on data error: ', err);
    }
  }
});
```
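As a side note, the snippet above only special-cases 'Q' messages. A more general approach would frame on the standard 5-byte header for every regular message type and peel complete messages off the buffer. A rough sketch of that idea (not the actual fix, and it ignores the startup/SSL packets, which carry no type byte):

```
let pending: Buffer = Buffer.alloc(0)

// Accumulate socket chunks and return only complete protocol messages.
function extractMessages(chunk: Buffer): Buffer[] {
  pending = Buffer.concat([pending, chunk])
  const messages: Buffer[] = []
  // Every regular frontend message is: 1 type byte + Int32 length,
  // where the length includes itself but not the type byte.
  while (pending.length >= 5) {
    const total = 1 + pending.readUInt32BE(1)
    if (pending.length < total) break // incomplete: wait for more data
    messages.push(pending.subarray(0, total))
    pending = pending.subarray(total)
  }
  return messages
}
```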