Bug: Prisma plugin did not start in time
What happened?
Fresh clone with Docker, and I get these logs:
Version
latest (ghcr.io/diced/zipline or ghcr.io/diced/zipline:latest)
What browser(s) are you seeing the problem on?
No response
Zipline Logs
2024-08-08 09:36:22,142 PM info [datasource] using Local(./uploads) datasource
2024-08-08 09:36:22,195 PM info [database::migrations] establishing database connection
2024-08-08 09:36:22,196 PM info [database::migrations] ensuring database exists, if not creating database - may error if no permissions
/zipline/node_modules/fastify/fastify.js:595
? appendStackTrace(err, new AVVIO_ERRORS_MAP[err.code](err.message))
^
FastifyError [Error]: fastify-plugin: Plugin did not start in time: 'prisma'. You may have forgotten to call 'done' function or to resolve a Promise
at manageErr (/zipline/node_modules/fastify/fastify.js:595:33)
at /zipline/node_modules/fastify/fastify.js:582:11
at Object._encapsulateThreeParam (/zipline/node_modules/avvio/boot.js:562:7)
at Boot.timeoutCall (/zipline/node_modules/avvio/boot.js:458:5)
at Boot.callWithCbOrNextTick (/zipline/node_modules/avvio/boot.js:440:19)
at release (/zipline/node_modules/fastq/queue.js:149:16)
at Object.resume (/zipline/node_modules/fastq/queue.js:82:7)
at /zipline/node_modules/avvio/boot.js:174:18
at /zipline/node_modules/avvio/plugin.js:275:7
at done (/zipline/node_modules/avvio/plugin.js:200:5) {
code: 'FST_ERR_PLUGIN_TIMEOUT',
statusCode: 500,
cause: AvvioError [Error]: Plugin did not start in time: 'prisma'. You may have forgotten to call 'done' function or to resolve a Promise
at Timeout._onTimeout (/zipline/node_modules/avvio/plugin.js:122:19)
at listOnTimeout (node:internal/timers:569:17)
at process.processTimers (node:internal/timers:512:7) {
code: 'AVV_ERR_READY_TIMEOUT',
fn: <ref *1> [AsyncFunction: prismaPlugin] {
default: [Circular *1],
prisma: [Circular *1],
[Symbol(skip-override)]: true,
[Symbol(fastify.display-name)]: 'prisma',
[Symbol(plugin-meta)]: {
name: 'prisma',
fastify: '4.x',
decorators: { fastify: [ 'config' ] },
dependencies: [ 'config' ]
}
}
}
}
Node.js v18.16.0
2024-08-08 09:36:45,543 PM info [datasource] using Local(./uploads) datasource
2024-08-08 09:36:45,597 PM info [database::migrations] establishing database connection
2024-08-08 09:36:45,598 PM info [database::migrations] ensuring database exists, if not creating database - may error if no permissions
Applying migration `20221030224830_oauth_fix_refresh`
2024-08-08 09:36:46,596 PM error [database::migrations] failed to migrate database
2024-08-08 09:36:46,596 PM error [database::migrations] Failed to migrate database... exiting...
2024-08-08 09:36:46,598 PM error [database::migrations] Error: P3018
A migration failed to apply. New migrations cannot be applied before the error is recovered from. Read more about how to resolve migration issues in a production database: https://pris.ly/d/migrate-resolve
Migration name: 20221030224830_oauth_fix_refresh
Database error code: 42704
Database error:
ERROR: index "OAuth_provider_key" does not exist
DbError { severity: "ERROR", parsed_severity: Some(Error), code: SqlState(E42704), message: "index \"OAuth_provider_key\" does not exist", detail: None, hint: None, position: None, where_: None, schema: None, table: None, column: None, datatype: None, constraint: None, file: Some("tablecmds.c"), line: Some(1286), routine: Some("DropErrorMsgNonExistent") }
at Object.<anonymous> (/zipline/node_modules/@prisma/migrate/dist/SchemaEngine.js:415:24)
at SchemaEngine.handleResponse (/zipline/node_modules/@prisma/migrate/dist/SchemaEngine.js:256:36)
at LineStream.<anonymous> (/zipline/node_modules/@prisma/migrate/dist/SchemaEngine.js:363:16)
at LineStream.emit (node:events:513:28)
at addChunk (node:internal/streams/readable:324:12)
at readableAddChunk (node:internal/streams/readable:297:9)
at Readable.push (node:internal/streams/readable:234:10)
at LineStream._pushBuffer (/zipline/node_modules/@prisma/migrate/dist/utils/byline.js:103:17)
at LineStream._transform (/zipline/node_modules/@prisma/migrate/dist/utils/byline.js:97:8)
at Transform._write (node:internal/streams/transform:175:8)
Browser Logs
N/A
Additional Info
I found this link related to the error; not sure if it helps: https://github.com/fastify/fastify-cli/issues/422
I was able to get it working using the trunk tag, but the issue is still present with the latest one.
I just set up Zipline hours ago. Everything worked just fine when I tested the docker-compose setup on my desktop, but when I tried to do the same thing on the NAS, I got this very same error. I dropped everything in the database and restarted the compose project. I had to do it two more times, but on the 4th attempt everything was working once again. I have no idea how this is possible, but hey, as long as it works... (It has to be some sort of concurrency issue during the DB migration; I can't see any other explanation for such an intermittent error.)
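If it really is a race between Postgres coming up and the migrations running, making Zipline wait for a healthy database might avoid it. A minimal sketch of the idea (untested; the postgres/zipline service names and the postgres image tag are assumptions, not taken from the stock compose file):

services:
  postgres:
    image: postgres:15
    healthcheck:
      # wait until the server actually accepts connections
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 10
  zipline:
    image: ghcr.io/diced/zipline:latest
    depends_on:
      postgres:
        condition: service_healthy  # hold Zipline (and its migrations) until the healthcheck passes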
Having the same problem
Yeah, I was doing it on my Synology NAS using Portainer and this was the error I was getting. The trunk version worked though, so I just stuck with that.
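For anyone wanting to do the same, it is just a matter of changing the image tag in the compose file (sketch; the service name is an assumption):

services:
  zipline:
    # trunk instead of latest
    image: ghcr.io/diced/zipline:trunk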
Tried the trunk version too; still won't work, same error as described above.
@roafhtun as I said, I had to clean up Postgres, then docker compose down, then docker compose up. On the 4th try it worked.
I wish I had a more reliable solution, but that was pretty much it.
Oh, and when you clean Postgres, make sure you delete everything, not just the tables.
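One way to be sure everything is gone: if the compose file keeps the data in a named volume, as sketched below, then docker compose down -v removes the volume together with the containers, wiping the whole cluster and not just the tables (service and volume names here are assumptions):

services:
  postgres:
    image: postgres:15
    volumes:
      - pg_data:/var/lib/postgresql/data  # the entire cluster lives here

volumes:
  pg_data:  # deleted by `docker compose down -v`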
I also encountered this problem on a Synology NAS. I can run it locally without problems, but no matter how many times I reset and deleted Postgres and restarted the project on Synology, it would not work properly. I also tried the trunk version, but it didn't work either.
The good news is that I found a way to solve the problem reliably: in the docker-compose file on both the local machine and the Synology, change the Postgres volumes to map a host folder:
volumes:
  - ./pg_data:/var/lib/postgresql/data
Once Docker runs successfully on the local machine and the configuration is complete, copying the local pg_data folder over to replace the pg_data folder on the Synology makes it work.
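In context, the Postgres service would look roughly like this (a sketch only; the image and environment values are assumptions to match your existing setup, the volumes line is the one from the comment above):

services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_USER: postgres        # assumption: use your existing credentials
      POSTGRES_PASSWORD: postgres
    volumes:
      # host folder instead of a named volume, so the data directory
      # can be copied from the desktop to the NAS as-is
      - ./pg_data:/var/lib/postgresql/data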
Closing this issue for now. I have not been able to reproduce it since it was opened, but it looks like some solutions have popped up above. Hopefully this is no longer an issue!