drizzle-orm
[FEATURE]: Allow to specify a particular schema name in the migrate() config for postgres
Describe what you want
Allow users to specify a target schema name when running migrations in postgres. Currently, `migrate()` always targets the `public` schema. This enhancement would enable migrations in multi-tenant use cases where each schema must have the same migrations applied to stay in sync with the application's requirements. Please take a look at the following code, written against the currently supported functionality.
```ts
import { drizzle } from 'drizzle-orm/postgres-js'
import { migrate } from 'drizzle-orm/postgres-js/migrator'
import postgres from 'postgres'
import * as dotenv from 'dotenv'

dotenv.config({ path: './.env.local' })

const sql = postgres(process.env.DATABASE_URL!, { max: 1 })
const db = drizzle(sql)

const main = async () => {
  await migrate(db, { migrationsFolder: 'drizzle' })
  await sql.end()
  process.exit(0)
}

main()
```
I propose extending the migrate config with an additional `schema` property that accepts either a single target schema name or an array of schema names. In the multi-schema scenario, the `migrate()` function can iterate over the array and apply the migrations to each schema in turn.
The following code snippets show the proposed enhancements.
- Default behaviour, targets the `public` schema:

```ts
await migrate(db, { migrationsFolder: 'drizzle' })
```

- Target one specific schema:

```ts
await migrate(db, { migrationsFolder: 'drizzle', schema: 'cust1' })
```

- Target multiple schemas:

```ts
await migrate(db, { migrationsFolder: 'drizzle', schema: ['cust1', 'cust2', 'cust3'] })
```

- Target multiple schemas including `public`:

```ts
await migrate(db, { migrationsFolder: 'drizzle', schema: ['public', 'cust1', 'cust2', 'cust3'] })
```
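The option handling behind these call sites can be sketched as a small pure function. This is an illustrative sketch, not drizzle-orm's API: `normalizeSchemas` is a hypothetical helper name showing how the proposed `schema` option would collapse to a list of target schemas.

```typescript
// Hypothetical helper: normalize the proposed `schema` option
// (string | string[] | undefined) into a list of target schemas,
// defaulting to ["public"] when the option is omitted.
function normalizeSchemas(schema?: string | string[]): string[] {
  if (schema === undefined) return ["public"];
  return typeof schema === "string" ? [schema] : schema;
}
```

With this shape, all four call sites above reduce to the same iteration: apply every migration once per entry in the normalized list.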
I have modified the code and tested it in my dev environment, and it works satisfactorily. The full patched file `./drizzle-orm/src/migrator.ts` is reproduced below for your reference and possible inclusion. The file is annotated with two kinds of inline comments: a. `<-- new line added`, b. `<-- line modified`.
```ts
import crypto from "node:crypto";
import fs from "node:fs";
import path from "node:path";

export interface KitConfig {
  out: string;
  schema: string;
}

export interface MigrationConfig {
  migrationsFolder: string;
  migrationsTable?: string;
  schema?: string | string[]; // <-- new line added
}

export interface MigrationMeta {
  sql: string[];
  folderMillis: number;
  hash: string;
  bps: boolean;
}

export function readMigrationFiles(config: string | MigrationConfig): MigrationMeta[] {
  let schemaList = ["public"]; // <-- new line added
  let migrationFolderTo: string | undefined;
  if (typeof config === "string") {
    const configAsString = fs.readFileSync(path.resolve(".", config), "utf8");
    const jsonConfig = JSON.parse(configAsString) as KitConfig;
    migrationFolderTo = jsonConfig.out;
  } else {
    migrationFolderTo = config.migrationsFolder;
    if (config.schema) { // <-- new line added
      schemaList = typeof config.schema === "string" ? [config.schema] : config.schema; // <-- new line added
    } // <-- new line added
  }

  if (!migrationFolderTo) {
    throw new Error("no migration folder defined");
  }

  const migrationQueries: MigrationMeta[] = [];

  const journalPath = `${migrationFolderTo}/meta/_journal.json`;
  if (!fs.existsSync(journalPath)) {
    throw new Error(`Can't find meta/_journal.json file`);
  }

  const journalAsString = fs
    .readFileSync(`${migrationFolderTo}/meta/_journal.json`)
    .toString();

  const journal = JSON.parse(journalAsString) as {
    entries: { idx: number; when: number; tag: string; breakpoints: boolean }[];
  };

  for (const schema of schemaList) { // <-- new line added
    for (const journalEntry of journal.entries) {
      const migrationPath = `${migrationFolderTo}/${journalEntry.tag}.sql`;
      try {
        const query = fs
          .readFileSync(`${migrationFolderTo}/${journalEntry.tag}.sql`)
          .toString();

        const result = query.split("--> statement-breakpoint").map((it) => {
          return it;
        });

        migrationQueries.push({
          sql: schema === "public" ? result : [`set schema '${schema}';`, ...result], // <-- line modified
          bps: journalEntry.breakpoints,
          folderMillis: journalEntry.when,
          hash: crypto.createHash("sha256").update(query).digest("hex"),
        });
      } catch {
        throw new Error(
          `No file ${migrationPath} found in ${migrationFolderTo} folder`
        );
      }
    }
  } // <-- new line added

  return migrationQueries;
}
```
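The core of the patch is the per-schema rewrite of each migration's SQL. The sketch below isolates that logic in a hypothetical standalone function (`queriesForSchema` is not part of drizzle-orm): split a migration file on drizzle-kit's `--> statement-breakpoint` markers and, for any schema other than `public`, prepend a `set schema` statement so the following DDL runs against that schema.

```typescript
// Hypothetical sketch of the per-schema rewrite used in the patch above.
// Splits a migration file into its statements and, for non-public
// schemas, prepends a `set schema` statement.
function queriesForSchema(query: string, schema: string): string[] {
  const statements = query.split("--> statement-breakpoint");
  return schema === "public"
    ? statements
    : [`set schema '${schema}';`, ...statements];
}
```

Note that the schema name is interpolated into raw SQL, so it must come from trusted configuration (never from user input), and migrations must not qualify table names with an explicit schema for the `set schema` approach to take effect.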
@aflatoon2874 Here is my angle on this matter -> https://github.com/drizzle-team/drizzle-orm/pull/1638. You can specify the schema in the connection URL like:
```ts
const sql = postgres('postgres://username:password@host:port/database?search_path=schemaName');
```
I think your solution is the real deal, but it would need to be implemented for all the dialects.
Curious @g3r4n - changing the schema via the connection URL looks like an exceedingly simple solution. Does it require https://github.com/drizzle-team/drizzle-orm/pull/1638 to be merged in order for it to work?
@kennyjwilli Yes, because the state of the migrations is saved in the same place for all schemas, which isn't what you want with multi-schema migration. All migrations are "schemaless", so by just changing the connection string you can migrate whichever schema you want.
I noticed that your changes for the migration-tracking table and schemaName were merged. Perhaps it would be better to just add a schemaId column to the table instead of having a different table/schema for each migrated schema?
Nonetheless, it looks like the `search_path=schemaName` URL parameter and the changes mentioned above (https://github.com/drizzle-team/drizzle-orm/issues/908#issuecomment-1647867344) have not been implemented. How are you currently specifying the schema when doing your multi-tenant migrations?
I am in exactly the same position: I need identical migrations applied per schema, plus an initial database setup that gives each new user their own schema, and I have not found an elegant solution.
@bitaccesscomau I'm specifying the `search_path` in the connection URL. I think it's better to keep the migration table in the schema you ran the migrations against. If you drop the schema, you also drop the record of the migrations run for that schema, which ensures your migration tables are always up to date for the schema they reference.
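The per-tenant loop described in this thread can be sketched without touching drizzle-orm at all: derive one connection URL per schema by setting `search_path`, then run the unmodified `migrate()` once per URL. The URL-building part is pure and shown below; `urlForSchema` is a hypothetical helper, and whether `search_path` in the URL is honored depends on the postgres driver in use.

```typescript
// Hypothetical helper: produce a per-tenant connection URL by setting
// the `search_path` query parameter, so migrations (and the migration
// tracking table) land in that tenant's schema.
function urlForSchema(baseUrl: string, schema: string): string {
  const url = new URL(baseUrl);
  url.searchParams.set("search_path", schema);
  return url.toString();
}
```

A caller could then do, for each tenant: connect with `postgres(urlForSchema(baseUrl, tenant), { max: 1 })`, run `migrate()`, and close the connection before moving to the next schema.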