chore(deps): update all non-major dependencies
This PR contains the following updates:
Release Notes
libsql/libsql-client-ts (@libsql/client)
v0.9.0
v0.8.1
- Fix embedded replica sync WAL index path name, which caused "No such file or directory" for local sync in some cases (#244).
v0.8.0
- No changes from 0.8.0-pre.1.
v0.7.0
- Add configurable concurrency limit for parallel query execution (defaults to 20) to address socket hangup errors.
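A minimal sketch of how that limit might be tuned, assuming it is exposed as a `concurrency` key on the client config (the URL is illustrative):

```ts
import { createClient } from "@libsql/client";

// "concurrency" is assumed to be the config key for the parallel-query
// limit described above; 20 mirrors the stated default.
const client = createClient({
  url: "file:local.db",
  concurrency: 20,
});
```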
v0.6.2
- Fix compatibility issue with libSQL server versions that don't have migrations endpoint.
v0.6.1
- Add an option to `batch()` to wait for schema changes to finish when using shared schema.
vitest-dev/vitest (@vitest/coverage-v8)
v1.6.0
🚀 Features
- Support standalone mode - by @sheremet-va in https://github.com/vitest-dev/vitest/issues/5565 (bdce0)
- Custom "snapshotEnvironment" option - by @sheremet-va in https://github.com/vitest-dev/vitest/issues/5449 (30f72)
- benchmark: Support comparing benchmark result - by @hi-ogawa and @sheremet-va in https://github.com/vitest-dev/vitest/issues/5398 (f8d3d)
- browser: Allow injecting scripts - by @sheremet-va in https://github.com/vitest-dev/vitest/issues/5656 (21e58)
- reporter: Support `includeConsoleOutput` and `addFileAttribute` in junit - by @hi-ogawa in https://github.com/vitest-dev/vitest/issues/5659 (2f913)
- ui: Sort items by file name - by @btea in https://github.com/vitest-dev/vitest/issues/5652 (1f726)
🐞 Bug Fixes
- Keep order of arguments for .each in custom task collectors - by @sheremet-va in https://github.com/vitest-dev/vitest/issues/5640 (7d57c)
- Call `resolveId('vitest')` after `buildStart` - by @hi-ogawa in https://github.com/vitest-dev/vitest/issues/5646 (f5faf)
- Hash the name of the file when caching - by @sheremet-va in https://github.com/vitest-dev/vitest/issues/5654 (c9e68)
- Don't panic on empty files in node_modules - by @sheremet-va (40c29)
- Use `toJSON` for error serialization - by @hi-ogawa in https://github.com/vitest-dev/vitest/issues/5526 (19a21)
- coverage:
  - Exclude `*.test-d.*` by default - by @MindfulPol in https://github.com/vitest-dev/vitest/issues/5634 (bfe8a)
  - Apply `vite-node`'s wrapper only to executed files - by @AriPerkkio in https://github.com/vitest-dev/vitest/issues/5642 (c9883)
- vm:
  - Support network imports - by @sheremet-va in https://github.com/vitest-dev/vitest/issues/5610 (103a6)
🏎 Performance
- Improve performance of forks pool - by @sheremet-va in https://github.com/vitest-dev/vitest/issues/5592 (d8304)
- Unnecessary rpc call when coverage is disabled - by @AriPerkkio in https://github.com/vitest-dev/vitest/issues/5658 (c5712)
View changes on GitHub
v1.5.3
🐞 Bug Fixes
- Use package.json name for a workspace project if not provided - by @sheremet-va in https://github.com/vitest-dev/vitest/issues/5608 (48fba)
- Backport jest iterable equality within object - by @sukovanej in https://github.com/vitest-dev/vitest/issues/5621 (30e5d)
- browser: Support benchmark - by @hi-ogawa in https://github.com/vitest-dev/vitest/issues/5622 (becab)
- reporter: Use default error formatter for JUnit - by @hi-ogawa in https://github.com/vitest-dev/vitest/issues/5629 (20060)
View changes on GitHub
unjs/automd (automd)
v0.3.8
🚀 Enhancements
- Upgrade c12 with jiti v2 with esm support (a42d4d2)
🩹 Fixes
- `version` should be obtained automatically when set to `true` (#59)
📖 Documentation
- Add jsdocs for main exports (#55)
🏡 Chore
❤️ Contributors
- Byron (@byronogis)
- Pooya Parsa (@pi0)
- Max (@onmax)
drizzle-team/drizzle-orm (drizzle-orm)
v0.33.0
Breaking changes (for some of postgres.js users)
Bugs fixed for this breaking change
- [BUG]: jsonb always inserted as a json string when using postgres-js
- [BUG]: jsonb type on postgres implement incorrectly
As we are doing with other drivers, we've changed the behavior of PostgreSQL-JS to pass raw JSON values, the same as you see them in the database. So if you are using the PostgreSQL-JS driver and passing data to Drizzle elsewhere, please check the new behavior of the client after it is passed to Drizzle.
We will update Drizzle to ensure it does not override driver behaviors, but this is a complex task that touches everything in Drizzle and will be handled in future releases.
If you were using `postgres-js` with `jsonb` fields, you might have seen stringified objects in your database, while Drizzle insert and select operations were working as expected.
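For illustration, a minimal sketch of a schema where this could show up (table and column names are hypothetical):

```ts
import { pgTable, serial, jsonb } from 'drizzle-orm/pg-core';

// Hypothetical table: before this release, the postgres-js driver could
// end up storing the inserted object as a JSON string rather than a raw
// jsonb value, even though Drizzle round-tripped it correctly.
export const events = pgTable('events', {
  id: serial('id').primaryKey(),
  payload: jsonb('payload'),
});

// `db` is your drizzle postgres-js instance
await db.insert(events).values({ payload: { kind: 'click' } });
```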
You need to convert those fields from strings to actual JSON objects. To do this, you can use the following query to update your database:
if you are using `jsonb`:

```sql
update table_name
set jsonb_column = (jsonb_column #>> '{}')::jsonb;
```
if you are using `json`:

```sql
update table_name
set json_column = (json_column #>> '{}')::json;
```
We've tested it in several cases, and it worked well, but only if all stringified objects are arrays or objects. If you have primitives like strings, numbers, booleans, etc., you can use this query to update all the fields
if you are using `jsonb`:

```sql
UPDATE table_name
SET jsonb_column = CASE
    -- Convert to JSONB if it is a valid JSON object or array
    WHEN jsonb_column #>> '{}' LIKE '{%' OR jsonb_column #>> '{}' LIKE '[%' THEN
        (jsonb_column #>> '{}')::jsonb
    ELSE
        jsonb_column
END
WHERE jsonb_column IS NOT NULL;
```
if you are using `json`:

```sql
UPDATE table_name
SET json_column = CASE
    -- Convert to JSON if it is a valid JSON object or array
    WHEN json_column #>> '{}' LIKE '{%' OR json_column #>> '{}' LIKE '[%' THEN
        (json_column #>> '{}')::json
    ELSE
        json_column
END
WHERE json_column IS NOT NULL;
```
If nothing works for you and you are blocked, please reach out to me @AndriiSherman. I will try to help you!
Bug Fixes
- [BUG]: boolean mode not working with prepared statements (bettersqlite) - thanks @veloii
- [BUG]: isTable helper function is not working - thanks @hajek-raven
- [BUG]: Documentation is outdated on inArray and notInArray Methods - thanks @RemiPeruto
v0.32.2
- Fix AWS Data API type hints bugs in RQB
- Fix set transactions in MySQL bug - thanks @roguesherlock
- Add forwarding dependencies within useLiveQuery, fixes #2651 - thanks @anstapol
- Export additional types from the SQLite package, like `AnySQLiteUpdate` - thanks @veloii
v0.32.1
- Fix typings for indexes and allow creating indexes on 3+ columns mixing columns and expressions - thanks @lbguilherme!
- Added support for "limit 0" in all dialects - closes #2011 - thanks @sillvva!
- Make inArray and notInArray accept empty list, closes #1295 - thanks @RemiPeruto!
- fix typo in lt typedoc - thanks @dalechyn!
- fix wrong example in README.md - thanks @7flash!
v0.32.0
Release notes for [email protected] and [email protected]
It's not mandatory to upgrade both packages, but if you want to use the new features in both queries and migrations, you will need to upgrade both packages
New Features
🎉 MySQL `$returningId()` function
MySQL itself doesn't have native support for `RETURNING` after using `INSERT`. There is only one way to do it for primary keys with `autoincrement` (or `serial`) types, where you can access `insertId` and `affectedRows` fields. We've prepared an automatic way for you to handle such cases with Drizzle and automatically receive all inserted IDs as separate objects
```ts
import { boolean, int, text, mysqlTable } from 'drizzle-orm/mysql-core';

const usersTable = mysqlTable('users', {
  id: int('id').primaryKey(),
  name: text('name').notNull(),
  verified: boolean('verified').notNull().default(false),
});

const result = await db.insert(usersTable).values([{ name: 'John' }, { name: 'John1' }]).$returningId();
//    ^? { id: number }[]
```
Also with Drizzle, you can specify a primary key with the `$default` function that will generate custom primary keys at runtime. We will also return those generated keys for you in the `$returningId()` call.
```ts
import { varchar, text, mysqlTable } from 'drizzle-orm/mysql-core';
import { createId } from '@paralleldrive/cuid2';

const usersTableDefFn = mysqlTable('users_default_fn', {
  customId: varchar('id', { length: 256 }).primaryKey().$defaultFn(createId),
  name: text('name').notNull(),
});

const result = await db.insert(usersTableDefFn).values([{ name: 'John' }, { name: 'John1' }]).$returningId();
//    ^? { customId: string }[]
```
If there are no primary keys, the type will be `{}[]` for such queries
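A quick sketch of that case (hypothetical table without a primary key):

```ts
import { text, mysqlTable } from 'drizzle-orm/mysql-core';

// hypothetical table with no primary key
const logs = mysqlTable('logs', {
  message: text('message').notNull(),
});

const result = await db.insert(logs).values({ message: 'hi' }).$returningId();
//    ^? {}[]
```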
🎉 PostgreSQL Sequences
You can now specify sequences in Postgres within any schema you need and define all the available properties
Example
```ts
import { pgSchema, pgSequence } from "drizzle-orm/pg-core";

// No params specified
export const customSequence = pgSequence("name");

// Sequence with params
export const customSequence = pgSequence("name", {
  startWith: 100,
  maxValue: 10000,
  minValue: 100,
  cycle: true,
  cache: 10,
  increment: 2,
});

// Sequence in custom schema
export const customSchema = pgSchema('custom_schema');
export const customSequence = customSchema.sequence("name");
```
🎉 PostgreSQL Identity Columns
Source: As mentioned, the `serial` type in Postgres is outdated and should be deprecated. Ideally, you should not use it. Identity columns are the recommended way to specify sequences in your schema, which is why we are introducing the identity columns feature
Example
```ts
import { pgTable, integer, text } from 'drizzle-orm/pg-core';

export const ingredients = pgTable("ingredients", {
  id: integer("id").primaryKey().generatedAlwaysAsIdentity({ startWith: 1000 }),
  name: text("name").notNull(),
  description: text("description"),
});
```
You can specify all properties available for sequences in the `.generatedAlwaysAsIdentity()` function. Additionally, you can specify custom names for these sequences
PostgreSQL docs reference.
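A short sketch of a custom sequence name, assuming it is passed as a `name` option alongside the other sequence properties:

```ts
import { pgTable, integer } from 'drizzle-orm/pg-core';

export const orders = pgTable("orders", {
  // "name" is assumed to set the backing sequence's name
  id: integer("id").generatedAlwaysAsIdentity({ name: "orders_id_seq", startWith: 100 }),
});
```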
🎉 PostgreSQL Generated Columns
You can now specify generated columns on any column type for which PostgreSQL supports generated columns
Example with a generated column for `tsvector`
Note: we will add the `tsVector` column type before the latest release
```ts
import { SQL, sql } from "drizzle-orm";
import { customType, index, integer, pgTable, text } from "drizzle-orm/pg-core";

const tsVector = customType<{ data: string }>({
  dataType() {
    return "tsvector";
  },
});

export const test = pgTable(
  "test",
  {
    id: integer("id").primaryKey().generatedAlwaysAsIdentity(),
    content: text("content"),
    contentSearch: tsVector("content_search", {
      dimensions: 3,
    }).generatedAlwaysAs(
      (): SQL => sql`to_tsvector('english', ${test.content})`,
    ),
  },
  (t) => ({
    idx: index("idx_content_search").using("gin", t.contentSearch),
  }),
);
```
In case you don't need to reference any columns from your table, you can use just the `sql` template or a string
```ts
export const users = pgTable("users", {
  id: integer("id"),
  name: text("name"),
  generatedName: text("gen_name").generatedAlwaysAs(sql`hello world!`),
  generatedName1: text("gen_name1").generatedAlwaysAs("hello world!"),
});
```
🎉 MySQL Generated Columns
You can now specify generated columns on any column type for which MySQL supports generated columns
You can specify both `stored` and `virtual` options; for more info you can check the MySQL docs.
Also, MySQL has a few limitations for such column usage, which are described here.
Drizzle Kit will also have limitations for the `push` command:

- You can't change the generated constraint expression and type using `push`. Drizzle Kit will ignore this change. To make it work, you would need to drop the column, `push`, and then add a column with a new expression. This was done due to the complex mapping from the database side, where the schema expression will be modified on the database side and, on introspection, we will get a different string. We can't be sure if you changed this expression or if it was changed and formatted by the database. As long as these are generated columns and `push` is mostly used for prototyping on a local database, it should be fast to `drop` and `create` generated columns. Since these columns are `generated`, all the data will be restored.
- `generate` should have no limitations.
Example
```ts
import { SQL, sql } from "drizzle-orm";
import { int, text, mysqlTable } from "drizzle-orm/mysql-core";

export const users = mysqlTable("users", {
  id: int("id"),
  id2: int("id2"),
  name: text("name"),
  generatedName: text("gen_name").generatedAlwaysAs(
    (): SQL => sql`${users.name} || 'hello'`,
    { mode: "stored" },
  ),
  generatedName1: text("gen_name1").generatedAlwaysAs(
    (): SQL => sql`${users.name} || 'hello'`,
    { mode: "virtual" },
  ),
});
```
In case you don't need to reference any columns from your table, you can use just the `sql` template or a string in `.generatedAlwaysAs()`, as sketched below.
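A minimal sketch mirroring the PostgreSQL example above, assuming the same constant-expression behavior in MySQL (quoting the string literal is an assumption based on MySQL's expression syntax):

```ts
import { sql } from 'drizzle-orm';
import { int, text, mysqlTable } from 'drizzle-orm/mysql-core';

export const greetings = mysqlTable('greetings', {
  id: int('id'),
  // constant expressions, no column references needed
  generatedName: text('gen_name').generatedAlwaysAs(sql`'hello world!'`),
  generatedName1: text('gen_name1').generatedAlwaysAs("'hello world!'"),
});
```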
🎉 SQLite Generated Columns
You can now specify generated columns on any column type for which SQLite supports generated columns
You can specify both `stored` and `virtual` options; for more info you can check the SQLite docs.
Also, SQLite has a few limitations for such column usage, which are described here.
Drizzle Kit will also have limitations for the `push` and `generate` commands:
- You can't change the generated constraint expression with the stored type in an existing table. You would need to delete this table and create it again. This is due to SQLite limitations for such actions. We will handle this case in future releases (it will involve the creation of a new table with data migration).
- You can't add a `stored` generated expression to an existing column for the same reason as above. However, you can add a `virtual` expression to an existing column.
- You can't change a `stored` generated expression in an existing column for the same reason as above. However, you can change a `virtual` expression.
- You can't change the generated constraint type from `virtual` to `stored` for the same reason as above. However, you can change from `stored` to `virtual`.
New Drizzle Kit features
🎉 Migrations support for all the new orm features
PostgreSQL sequences, identity columns and generated columns for all dialects
🎉 New flag `--force` for `drizzle-kit push`
You can auto-accept all data-loss statements using the push command. It's only available as a CLI parameter. Make sure you use it only if you are fine with running data-loss statements on your database.
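An invocation might look like this (assuming the usual npx entry point):

```
npx drizzle-kit push --force
```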
🎉 New `migrations` flag `prefix`
You can now customize migration file prefixes to make the format suitable for your migration tools:
- `index` is the default type and will result in `0001_name.sql` file names;
- `supabase` and `timestamp` are equal and will result in `20240627123900_name.sql` file names;
- `unix` will result in unix-seconds prefixes, i.e. `1719481298_name.sql` file names;
- `none` will omit the prefix completely;
Example: Supabase migrations format
```ts
import { defineConfig } from "drizzle-kit";

export default defineConfig({
  dialect: "postgresql",
  migrations: {
    prefix: 'supabase',
  },
});
```
v0.31.4
- Mark prisma clients package as optional - thanks @Cherry
v0.31.3
Bugs fixed
- 🛠️ Fixed RQB behavior for tables with same names in different schemas
- 🛠️ Fixed [BUG]: Mismatched type hints when using RDS Data API - #2097
New Prisma-Drizzle extension
```ts
import { PrismaClient } from '@prisma/client';
import { drizzle } from 'drizzle-orm/prisma/pg';
import { User } from './drizzle';

const prisma = new PrismaClient().$extends(drizzle());
const users = await prisma.$drizzle.select().from(User);
```
For more info, check docs: https://orm.drizzle.team/docs/prisma
v0.31.2
- 🎉 Added support for TiDB Cloud Serverless driver:

```ts
import { connect } from '@tidbcloud/serverless';
import { drizzle } from 'drizzle-orm/tidb-serverless';

const client = connect({ url: '...' });
const db = drizzle(client);
await db.select().from(...);
```
v0.31.1
New Features
Live Queries 🎉
For a full explanation of Drizzle + Expo, head over to the discussions.
As of `v0.31.1`, Drizzle ORM now has native support for Expo SQLite Live Queries! We've implemented a native `useLiveQuery` React Hook which observes necessary database changes and automatically re-runs database queries. It works with both SQL-like and Drizzle Queries:
```tsx
import { useLiveQuery, drizzle } from 'drizzle-orm/expo-sqlite';
import { openDatabaseSync } from 'expo-sqlite/next';
import { users } from './schema';
import { Text } from 'react-native';

const expo = openDatabaseSync('db.db', { enableChangeListener: true }); // <-- enable change listeners
const db = drizzle(expo);

const App = () => {
  // Re-renders automatically when data changes
  const { data } = useLiveQuery(db.select().from(users));

  // const { data, error, updatedAt } = useLiveQuery(db.query.users.findFirst());
  // const { data, error, updatedAt } = useLiveQuery(db.query.users.findMany());

  return <Text>{JSON.stringify(data)}</Text>;
};

export default App;
```
We've intentionally not changed the API of the ORM itself to stay with the conventional React Hook API, so we have `useLiveQuery(databaseQuery)` as opposed to `db.select().from(users).useLive()` or `db.query.users.useFindMany()`.
We've also decided to provide `data`, `error`, and `updatedAt` fields as the result of the hook for concise, explicit error handling, following the practices of React Query and Electric SQL.
v0.31.0
Breaking changes
Note: `[email protected]` can be used with `[email protected]` or higher. The same applies to Drizzle Kit. If you run a Drizzle Kit command, it will check and prompt you for an upgrade (if needed). You can check for Drizzle Kit updates below.
PostgreSQL indexes API was changed
The previous Drizzle+PostgreSQL indexes API was incorrect and was not aligned with the PostgreSQL documentation. The good thing is that it was not used in queries, and drizzle-kit didn't support all properties for indexes. This means we can now change the API to the correct one and provide full support for it in drizzle-kit
Previous API
- No way to define SQL expressions inside `.on`.
- `.using` and `.on` in our case are the same thing, so the API is incorrect here.
- `.asc()`, `.desc()`, `.nullsFirst()`, and `.nullsLast()` should be specified for each column or expression on indexes, but not on an index itself.
```ts
// Index declaration reference
index('name')
  .on(table.column1, table.column2, ...) or .onOnly(table.column1, table.column2, ...)
  .concurrently()
  .using(sql``) // sql expression
  .asc() or .desc()
  .nullsFirst() or .nullsLast()
  .where(sql``) // sql expression
```
Current API
```ts
// First example, with `.on()`
index('name')
  .on(table.column1.asc(), table.column2.nullsFirst(), ...) or .onOnly(table.column1.desc().nullsLast(), table.column2, ...)
  .concurrently()
  .where(sql``)
  .with({ fillfactor: '70' })

// Second example, with `.using()`
index('name')
  .using('btree', table.column1.asc(), sql`lower(${table.column2})`, table.column1.op('text_ops'))
  .where(sql``) // sql expression
  .with({ fillfactor: '70' })
```
New Features
🎉 "pg_vector" extension support
There is no specific code to create an extension inside the Drizzle schema. We assume that if you are using vector types, indexes, and queries, you have a PostgreSQL database with the `pg_vector` extension installed.
You can now specify indexes for `pg_vector` and utilize `pg_vector` functions for querying, ordering, etc.
Let's take a few examples of `pg_vector` indexes from the `pg_vector` docs and translate them to Drizzle
L2 distance, Inner product and Cosine distance
```ts
// CREATE INDEX ON items USING hnsw (embedding vector_l2_ops);
// CREATE INDEX ON items USING hnsw (embedding vector_ip_ops);
// CREATE INDEX ON items USING hnsw (embedding vector_cosine_ops);

const table = pgTable('items', {
  embedding: vector('embedding', { dimensions: 3 }),
}, (table) => ({
  l2: index('l2_index').using('hnsw', table.embedding.op('vector_l2_ops')),
  ip: index('ip_index').using('hnsw', table.embedding.op('vector_ip_ops')),
  cosine: index('cosine_index').using('hnsw', table.embedding.op('vector_cosine_ops')),
}));
```
L1 distance, Hamming distance and Jaccard distance - added in pg_vector 0.7.0 version
```ts
// CREATE INDEX ON items USING hnsw (embedding vector_l1_ops);
// CREATE INDEX ON items USING hnsw (embedding bit_hamming_ops);
// CREATE INDEX ON items USING hnsw (embedding bit_jaccard_ops);

const table = pgTable('table', {
  embedding: vector('embedding', { dimensions: 3 }),
}, (table) => ({
  l1: index('l1_index').using('hnsw', table.embedding.op('vector_l1_ops')),
  hamming: index('hamming_index').using('hnsw', table.embedding.op('bit_hamming_ops')),
  bit: index('bit_jaccard_index').using('hnsw', table.embedding.op('bit_jaccard_ops')),
}));
```
For queries, you can use predefined functions for vectors or create custom ones using the SQL template operator.
You can also use the following helpers:
```ts
import {
  l2Distance, l1Distance, innerProduct,
  cosineDistance, hammingDistance, jaccardDistance,
} from 'drizzle-orm';

l2Distance(table.column, [3, 1, 2]) // table.column <-> '[3, 1, 2]'
l1Distance(table.column, [3, 1, 2]) // table.column <+> '[3, 1, 2]'
innerProduct(table.column, [3, 1, 2]) // table.column <#> '[3, 1, 2]'
cosineDistance(table.column, [3, 1, 2]) // table.column <=> '[3, 1, 2]'
hammingDistance(table.column, '101') // table.column <~> '101'
jaccardDistance(table.column, '101') // table.column <%> '101'
```
If `pg_vector` has some other functions you want to use, you can replicate the implementation from an existing one we have. Here is how it can be done
```ts
export function l2Distance(
  column: SQLWrapper | AnyColumn,
  value: number[] | string[] | TypedQueryBuilder<any> | string,
): SQL {
  if (is(value, TypedQueryBuilder<any>) || typeof value === 'string') {
    return sql`${column} <-> ${value}`;
  }
  return sql`${column} <-> ${JSON.stringify(value)}`;
}
```
Name it as you wish and change the operator. This example allows for a numbers array, strings array, string, or even a select query. Feel free to create any other type you want or even contribute and submit a PR
Examples
Let's take a few examples of `pg_vector` queries from the `pg_vector` docs and translate them to Drizzle
```ts
import { l2Distance } from 'drizzle-orm';

// SELECT * FROM items ORDER BY embedding <-> '[3,1,2]' LIMIT 5;
db.select().from(items).orderBy(l2Distance(items.embedding, [3, 1, 2]))

// SELECT embedding <-> '[3,1,2]' AS distance FROM items;
db.select({ distance: l2Distance(items.embedding, [3, 1, 2]) })

// SELECT * FROM items ORDER BY embedding <-> (SELECT embedding FROM items WHERE id = 1) LIMIT 5;
const subquery = db.select({ embedding: items.embedding }).from(items).where(eq(items.id, 1));
db.select().from(items).orderBy(l2Distance(items.embedding, subquery)).limit(5)

// SELECT (embedding <#> '[3,1,2]') * -1 AS inner_product FROM items;
db.select({ innerProduct: sql`(${maxInnerProduct(items.embedding, [3, 1, 2])}) * -1` }).from(items)

// and more!
```
🎉 New PostgreSQL types: `point`, `line`
You can now use `point` and `line` from PostgreSQL Geometric Types.
Type `point` has 2 modes for mappings from the database: `tuple` and `xy`.
- `tuple` will be accepted for insert and mapped on select to a tuple. So, the database Point(1,2) will be typed as [1,2] with drizzle.
- `xy` will be accepted for insert and mapped on select to an object with x, y coordinates. So, the database Point(1,2) will be typed as `{ x: 1, y: 2 }` with drizzle.
```ts
const items = pgTable('items', {
  point: point('point'),
  pointObj: point('point_xy', { mode: 'xy' }),
});
```
Type `line` has 2 modes for mappings from the database: `tuple` and `abc`.
- `tuple` will be accepted for insert and mapped on select to a tuple. So, the database Line{1,2,3} will be typed as [1,2,3] with drizzle.
- `abc` will be accepted for insert and mapped on select to an object with a, b, and c constants from the equation `Ax + By + C = 0`. So, the database Line{1,2,3} will be typed as `{ a: 1, b: 2, c: 3 }` with drizzle.
```ts
const items = pgTable('items', {
  line: line('line'),
  lineObj: line('line_abc', { mode: 'abc' }),
});
```
🎉 Basic "postgis" extension support
There is no specific code to create an extension inside the Drizzle schema. We assume that if you are using postgis types, indexes, and queries, you have a PostgreSQL database with the `postgis` extension installed.
`geometry` type from the postgis extension:
```ts
const items = pgTable('items', {
  geo: geometry('geo', { type: 'point' }),
  geoObj: geometry('geo_obj', { type: 'point', mode: 'xy' }),
  geoSrid: geometry('geo_options', { type: 'point', mode: 'xy', srid: 4000 }),
});
```
mode
Type `geometry` has 2 modes for mappings from the database: `tuple` and `xy`.
- `tuple` will be accepted for insert and mapped on select to a tuple. So, the database geometry will be typed as [1,2] with drizzle.
- `xy` will be accepted for insert and mapped on select to an object with x, y coordinates. So, the database geometry will be typed as `{ x: 1, y: 2 }` with drizzle.
type
The current release has a predefined type: `point`, which is the `geometry(Point)` type in the PostgreSQL PostGIS extension. You can specify any string there if you want to use some other type.
Drizzle Kit updates: [email protected]
Release notes here are partially duplicated from [email protected]
New Features
🎉 Support for new types
Drizzle Kit can now handle:
- `point` and `line` from PostgreSQL
- `vector` from the PostgreSQL `pg_vector` extension
- `geometry` from the PostgreSQL `PostGIS` extension
🎉 New param in drizzle.config - extensionsFilters
The PostGIS extension creates a few internal tables in the `public` schema. This means that if you have a database with the PostGIS extension and use `push` or `introspect`, all those tables will be included in `diff` operations. In this case, you would need to specify `tablesFilter`, find all tables created by the extension, and list them in this parameter.
We have addressed this issue so that you won't need to take all these steps. Simply specify `extensionsFilters` with the name of the extension used, and Drizzle will skip all the necessary tables.
Currently, we only support the `postgis` option, but we plan to add more extensions if they create tables in the `public` schema.
The `postgis` option will skip the `geography_columns`, `geometry_columns`, and `spatial_ref_sys` tables
```ts
import { defineConfig } from 'drizzle-kit'

export default defineConfig({
  dialect: "postgresql",
  extensionsFilters: ["postgis"],
})
```
Improvements
Updated zod schemas for database credentials and added tests for all the positive/negative cases
- support the full set of SSL params in kit config, providing types from the node:tls connection
```ts
import { defineConfig } from 'drizzle-kit'

export default defineConfig({
  dialect: "postgresql",
  dbCredentials: {
    ssl: true, // "require" | "allow" | "prefer" | "verify-full" | options from node:tls
  },
})
```
```ts
import { defineConfig } from 'drizzle-kit'

export default defineConfig({
  dialect: "mysql",
  dbCredentials: {
    ssl: "", // string | SslOptions (ssl options from the mysql2 package)
  },
})
```
Normalized SQLite urls for `libsql` and `better-sqlite3` drivers
Those drivers have different file path patterns, and Drizzle Kit will accept both and create a proper file path format for each
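For instance (values are illustrative, not the exhaustive set of accepted forms):

```ts
import { defineConfig } from 'drizzle-kit'

export default defineConfig({
  dialect: "sqlite",
  dbCredentials: {
    // either a libsql-style URL ("file:./dev.db") or a plain
    // better-sqlite3-style path ("./dev.db") should now be accepted
    url: "file:./dev.db",
  },
})
```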
Updated MySQL and SQLite index-as-expression behavior
In this release, MySQL and SQLite will properly map expressions into the SQL query. Expressions won't be escaped as strings, but columns will be
```ts
export const users = sqliteTable(
  'users',
  {
    id: integer('id').primaryKey(),
    email: text('email').notNull(),
  },
  (table) => ({
    emailUniqueIndex: uniqueIndex('emailUniqueIndex').on(sql`lower(${table.email})`),
  }),
);
```
```sql
-- before
CREATE UNIQUE INDEX `emailUniqueIndex` ON `users` (`lower("users"."email")`);

-- now
CREATE UNIQUE INDEX `emailUniqueIndex` ON `users` (lower("email"));
```
Bug Fixes
- [BUG]: multiple constraints not added (only the first one is generated) - #2341
- Drizzle Studio: Error: Connection terminated unexpectedly - #435
- Unable to run sqlite migrations local - #432
- error: unknown option '--config' - #423
How `push` and `generate` work for indexes
Limitations
You should specify a name for your index manually if you have an index on at least one expression
Example
```ts
index().on(table.id, table.email) // will work well and the name will be autogenerated
index('my_name').on(table.id, table.email) // will work well

// but
index().on(sql`lower(${table.email})`) // error
index('my_name').on(sql`lower(${table.email})`) // will work well
```
Push won't generate statements if these fields (listed below) were changed in an existing index:
- expressions inside `.on()` and `.using()`
- `.where()` statements
- operator classes `.op()` on columns
If you are using `push` workflows and want to change these fields in the index, you would need to:
- Comment out the index
- Push
- Uncomment the index and change those fields
- Push again
For the `generate` command, `drizzle-kit` will be triggered by any changes in the index for any property in the new drizzle indexes API, so there are no limitations here.
v0.30.10
New Features
🎉 `.if()` function added to all WHERE expressions
Select all users after cursors if a cursor value was provided
```ts
async function someFunction(categories: string[] = [], views = 0) {
  await db
    .select()
    .from(posts)
    .where(
      and(
        gt(posts.views, views).if(views > 100),
        inArray(posts.category, categories).if(categories.length > 0),
      ),
    );
}
```
Bug Fixes
- Fixed internal mappings for sessions' `.all`, `.values`, `.execute` functions in AWS DataAPI
unjs/eslint-config (eslint-config-unjs)
v0.3.2
🏡 Chore
- Update unicorn plugin to 53 (0a944e4)
❤️ Contributors
- Pooya Parsa (@pi0)
v0.3.1
🩹 Fixes
- markdown: Override default rules (4765dd5)
🏡 Chore
- Remove prerelease script (pnpm why ?!) (a98c465)
❤️ Contributors
- Pooya Parsa (@pi0)
v0.3.0
unjs/jiti (jiti)
v1.21.6
🩹 Fixes
- Use internal cached modules only if loaded (#247)
v1.21.5
🩹 Fixes
From 1.21.4
v1.21.4
v1.21.3
🩹 Fixes
- Update mlly to ^1.7.1 (9adbcb3)
❤️ Contributors
- Pooya Parsa (@pi0)
v1.21.2
🩹 Fixes
- Pin mlly to 1.4.2 (#237)
❤️ Contributors
- Pooya Parsa (@pi0)
v1.21.1
🏡 Chore
- Update dependencies (0bd991b)
- Update dependencies (cfb106c)
- Update to eslint v9 (c11d953)
- Update deps and lockfile (95aa249)
- Run ci against 18 and 22 (65b4067)
- Lint (6f3bd76)
🤖 CI
- Skip extra checks (8fe6417)
❤️ Contributors
- Pooya Parsa (@pi0)
brianc/node-postgres (pg)
v8.12.0
- Add `queryMode` config option to force use of the extended query protocol on queries without any parameters.
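A brief sketch of how that might be set, assuming `queryMode` is accepted in the client config and `'extended'` is the value that forces the extended protocol:

```ts
import { Client } from 'pg';

const client = new Client({
  connectionString: process.env.DATABASE_URL,
  queryMode: 'extended', // assumption: forces the extended query protocol
});
```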
v8.11.6
pnpm/pnpm (pnpm)
v8.15.9
Patch Changes
- Deduplicate bin names to prevent race condition and corrupted bin scripts #7833.
Configuration
📅 Schedule: Branch creation - "after 2am and before 3am" (UTC), Automerge - "after 1am and before 2am" (UTC).
🚦 Automerge: Enabled.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.
This PR was generated by Mend Renovate. View the repository job log.