Exposed
Support performing operations on tables in multiple schemas
Fixes #145. Improves #805. This PR adds support for schema creation, cross-joins between tables in different schemas, and selects from different schemas.
/** author is the table Author but in schema1 */
val author = Author.withSchema(schema1)
Some examples of usage:
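(The examples assume `schema1` and `schema2` already exist. As a rough sketch of how they could be defined and created, hedged because the exact creation API of this PR may differ:)
/** Hypothetical setup for the schemas used below. */
val schema1 = Schema("schema1")
val schema2 = Schema("schema2")
SchemaUtils.createSchema(schema1, schema2)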
/** create tables in schemas */
val author = Author.withSchema(schema1)
val book = Book.withSchema(schema2)
SchemaUtils.create(author)
SchemaUtils.create(book)
/** insert into a table with a schema */
author.insert {
    it[Author.name] = "author2-name"
}
/** test inner-joins from different schemas */
(author innerJoin book).slice(Book.id, Author.id).selectAll().forEach {
    println("${it[Author.id]} wrote ${it[Book.id]}")
}
/** test cross-joins from different schemas */
/** You can use either author or Author (and book or Book) in slice and when accessing results */
(author crossJoin book).slice(Book.id, Author.id).selectAll().forEach {
    println("${it[Author.id]} wrote ${it[Book.id]}")
}
/** update table with schema. */
author.update({ Author.name eq "author1" }) {
    it[Author.name] = "author"
}
/** delete from table with schema. */
author.deleteWhere {
    Author.name eq "hichem"
}
object Author : IntIdTable("author") {
    val name = varchar("name", 20)
}
object Book : Table("book") {
    val id = integer("id")
    val authorId = reference("authorId", Author).nullable()
    override val primaryKey = PrimaryKey(id)
}
Hello @hfazai! Thank you for the PR. I see a few problems here:
1. I think the approach of cloning the table each time you call `withSchema` is quite expensive, as it uses reflection; if it is used in some "hot" place, the user will face a significant performance problem. What do you think about adding a simple LRU cache for the table + schema combination? (See the first sketch after this list.)
2. AFAIK some users already mimic schemas by adding them to table names, like `FooTable : Table("A.Foo")`. Those users can't use `withSchema`, as it will return `NewSchema.A.Foo` as the new identity for the table.
3. I'm against changing `metadata.getColumns(databaseName, currentScheme, "%", "%")` to `metadata.getColumns(null, null, "%", "%")`, as I know a few projects with thousands of schemas/catalogs, and such a change would make extracting metadata for a single table a very long operation. I guess it's ok to group the tables in `override fun columns(vararg tables: Table): Map<Table, List<ColumnMetadata>>` by schema and load the data per schema (see the second sketch after this list). In most cases, it still remains a single request.
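For the cache in (1), a minimal sketch of what I mean, assuming a plain LinkedHashMap in access order; `SchemaTableCache`, `MAX_ENTRIES`, and `getOrClone` are hypothetical names rather than Exposed API, and a real implementation would also need to be thread-safe:
import org.jetbrains.exposed.sql.Table

/** Hypothetical LRU cache for the table + schema combination, built on
 * LinkedHashMap's access-order eviction. Not thread-safe as written. */
object SchemaTableCache {
    private const val MAX_ENTRIES = 128

    private val cache = object : LinkedHashMap<Pair<Table, String>, Table>(16, 0.75f, true) {
        override fun removeEldestEntry(eldest: MutableMap.MutableEntry<Pair<Table, String>, Table>): Boolean =
            size > MAX_ENTRIES
    }

    /** Returns the cached clone for (table, schema), building it via [clone] only on a miss. */
    fun getOrClone(table: Table, schema: String, clone: () -> Table): Table =
        cache.getOrPut(table to schema, clone)
}
That way `withSchema` pays the reflection cost only on the first call per table + schema pair.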
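And for (3), roughly what I have in mind for the grouping; `schemaOf` and `columnMetadataFrom` are hypothetical helpers standing in for however the dialect resolves a table's schema and parses the JDBC result set:
/** Sketch: issue one getColumns() request per schema instead of one
 * unscoped request across all schemas. schemaOf() and
 * columnMetadataFrom() are hypothetical helpers. */
override fun columns(vararg tables: Table): Map<Table, List<ColumnMetadata>> {
    val result = mutableMapOf<Table, List<ColumnMetadata>>()
    tables.groupBy { schemaOf(it) }.forEach { (schema, tablesInSchema) ->
        // Still scoped to a single schema, so projects with thousands of
        // schemas/catalogs don't pay for metadata they never asked for.
        val rs = metadata.getColumns(databaseName, schema, "%", "%")
        val columnsByTable = columnMetadataFrom(rs) // tableName -> List<ColumnMetadata>
        rs.close()
        tablesInSchema.forEach { table ->
            result[table] = columnsByTable[table.tableName].orEmpty()
        }
    }
    return result
}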
Also, please add a test with functions on columns from different schemas just to be sure that it works as expected.
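Something along these lines would do, reusing the tables from the description; `lowerCase()` is Exposed's standard string function, but treat this as a sketch rather than the exact test:
/** Apply a column function (lowerCase) to a column from schema1
 * while joining against a table from schema2. */
val lowerName = Author.name.lowerCase()
(author innerJoin book).slice(lowerName, Book.id).selectAll().forEach {
    println("${it[lowerName]} wrote ${it[Book.id]}")
}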
Hi @Tapac,
Thanks for the change requests.
- In the beginning, I thought of providing a simple way to access the columns of a table in a specific schema; that's why I used reflection. But yeah, you're right, it would cause performance problems in such cases. The LRU cache idea seems good to me :+1:.
I also agree with (2) and (3).
@Tapac Done. Feel free to review the PR.
Any update on this?
Hey, I have a multi-schema use case that would be easier to manage with this feature. Is any help needed with this?